With neuroscience being the only non-industrial discipline to get any major boost in funding from the Harper government, and without any direct explanation of how such a basic science could possibly create “jobs, growth and long-term prosperity for Canadian families,” it’s about time there was some coverage of the new Brain Canada funding organization in the media. Have a read here

Facebook, scourge or savior? Love it or hate it, Facebook is pervasive and arguably pretty useful. Most recently though, the social network has made its way into the world of scientific research. Science mag recently published a story about a group of evolutionary biologists whose summer of research was saved by Facebook. These scientists had been collecting fishes in Guyana and ended up having only 24 hours to identify something like 5000 fish for export back to Toronto. Before they got the boot from Guyana, the group posted a bunch of photos to Facebook so their friends around the world could help with the classification, and managed to pull through with help from their social network.

In your faces, F-Bo pessimists.

You can see some of the photos that I assume have been left on Facebook for posterity here.

Although this group didn’t really take it all the way, this is sort of an example of how putting your data out there can be of benefit. For more on the Open Science movement, check out Michael Nielsen’s talk at TedX Waterloo.

PS I had written this a month ago and forgotten about it until I heard the story retold by Canadian hiphopper cum radio personality, Rich Terfry (aka Buck 65) of CBC Radio2 Drive. Have a listen to the hiphop or the radio show if it suits you.

Following up on my previous post about inner hair cells in the ear, I wanted to share this video. It’s of an outer hair cell, and the particulars are in the link, but the long and short is that these cells actually move to a beat. The cell in the video is actually moving in time with the music that you hear.

Incidentally, a professor of mine once explained to me that there are also neurons in the spinal cord/brain stem that fire alternating messages to your leg muscles when you hear a beat. Taking a bit of irresponsibly sensationalist license, I take this to mean that there are dance circuits set up in your legs. So next time you feel like dancing, don’t hide it, let nature take its course and do it unapologetically. Oddly, these neurons only fire on the off beat, which tells me that we should have seen the likes of jazz and bluegrass cropping up a lot earlier. Maybe classicists just didn’t listen to their legs enough.

Neuroscience is on the brink of a revolution. A brain, fully reconstructed on the cellular scale, is the end goal of a newborn field called connectomics, and although we are a long way off, it’s looking like it might actually be possible. We are now at the point where precise, large-scale reproduction of brain wiring is actually happening. If you look back through my posts you will see a number of projects that gathered enormous data sets to this end. For example, an open access database called FlyCircuit is a supercomputer-based model of the fruit fly brain based on pictures of over 16,000 individual neurons and counting. Once the FlyCircuit group generated all this data, they had to come up with special software to analyze how those neurons are positioned and connected inside the brain. This enormous amount of data is freely available online for people to use in their own analyses. In another paper, Davi Bock and his colleagues took 3.2 million extremely high-mag images of a 0.008 cubic millimetre chunk of mouse brain that took up 36 terabytes of disk space (data also freely available). The challenge then was to analyze it, and while they arguably didn’t get very much info from their analysis, mining their data in different ways could likely produce a lot more info from this one tiny bit of brain. So in neuroscience, we are now getting to the point where data is in major surplus.
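To get a feel for the scale here, a back-of-the-envelope extrapolation from Bock and colleagues’ numbers is instructive. (The ~500 cubic millimetre mouse brain volume below is my own rough assumption, not a figure from the paper.)

```python
# Scale the 36 TB needed for a 0.008 mm^3 chunk up to a whole mouse brain.
tb_per_mm3 = 36 / 0.008        # terabytes per cubic millimetre of brain
mouse_brain_mm3 = 500          # assumed whole-brain volume (rough guess)
total_tb = tb_per_mm3 * mouse_brain_mm3

# 1 exabyte = 1,000,000 terabytes
print(f"{tb_per_mm3:.0f} TB/mm^3 -> whole brain ~{total_tb / 1e6:.2f} exabytes")
```

At that imaging density, a full mouse connectome would run to exabytes of raw images, which is exactly the kind of surplus this post is about.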

In fact this is the trend throughout the scientific world; genomics, climate science and astronomy are just a few of the disciplines that are being hit with the data deluge (cheesy, alliterative buzz name coined elsewhere). Until recently, though, the case was exactly the opposite: the bottleneck of research was at the data acquisition end. In a recent special issue of Science on the data deluge, Richard Baraniuk explains that we have just moved out of an age where data is hard to acquire and into one where the challenge is what to do with our wealth of it. An indicator of this occurred in 2007, when for the first time the world produced more data than it could store. And now, on a yearly basis, we produce double the amount of data we can store. If you’ve read or heard tech-optimist Ray Kurzweil, you will know his Law of Accelerating Returns catch-stat that just about everything to do with data doubles every two years. As it turns out this is more or less the case for the world’s data storage capacity and data acquisition too, except that, as Baraniuk informs us, the world’s data storage capacity is growing slower (20% slower per year) than the amount of data produced, meaning that if the trend continues, our world will get ever more data rich, only to have to throw away a good portion of that wealth.
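A quick sketch shows how fast that gap widens. These are my own illustrative assumptions, not Baraniuk’s exact figures: data produced doubles every two years (about 41% annual growth), and storage capacity grows at a rate 20% lower than that.

```python
# Back-of-the-envelope sketch of the widening gap between data produced
# and data we can store. Growth rates are my own illustrative assumptions.
data_growth = 2 ** 0.5 - 1          # doubling every 2 years ~= 41%/year
storage_growth = data_growth * 0.8  # storage grows at a 20% lower rate

produced, capacity = 1.0, 1.0       # normalized to year 0
for year in range(1, 11):
    produced *= 1 + data_growth
    capacity *= 1 + storage_growth
    print(f"year {year:2d}: can store {capacity / produced:.0%} of data produced")
```

Under these assumptions, within a decade we can only store a bit over half of what we produce, so an ever-growing fraction of the wealth gets thrown away.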

Mining Neuroscience Data

In neuroscience, we aren’t at that point yet, and are probably a long way off, but I would argue that the field of connectomics in particular is going in the same direction.

In fact, in the special issue of Science that I mentioned before, Huda Akil and friends wrote a perspective piece on “Challenges and Opportunities in Mining Neuroscience Data.” They start by coining the phrase “neural choreography,” which they use to refer to the complex dance of neural communication and connectivity that makes up our brains and minds. In their article they stress that the brain can’t be understood from a purely reductionist approach and so we need to move from the very basics of neurobiology to the more complex. Akil and friends put forth that as we push the study of neural choreography forward, new layers of function will emerge, and thus the overarching goal in the field is to go from studying neuronal genes and proteins, to neurons, to neural circuits and finally to thought and behavior. This isn’t really news; I would wager that most neuroscientists, regardless of how zeroed in they are on their favorite gene or cell-type, have at least entertained the idea that they are joining in a process that will hopefully end in a full understanding of the brain and mind. However, Akil and friends do give some insightful details on some of the efforts that are starting to look at neural circuitry on a huge scale, and the resulting amounts of data.

They start with the details of a new initiative: the Human Connectome Project (HCP). Obviously echoing the Human Genome Project, the HCP aims to comprehensively map the major connections between brain areas, as well as compile information about each of the regions. The HCP is projected to comprise a petabyte of data (1000 terabytes), and all this data will be made available online for easy browsing and analysis. The HCP is currently using two main neuro-imaging methods to map out brain connections. Both of these methods use variations on magnetic resonance imaging (MRI) to scan the brains of living human beings. The first method, called diffusion mapping, exploits the preferential movement of water molecules along neural tracts (i.e. nerves or axon tracts) to determine the orientations of the neural fibres that connect brain areas to one another. The second technique is called Resting State Functional MRI. Functional MRI (fMRI) is essentially just video MRI that assesses which brain areas are more active than others during the scan. To do this, experimenters and physicians use a specialized MRI setup that tracks the levels of oxygen being used by each brain area. Generally, fMRI is performed on people doing some kind of mental task – thus determining which brain areas are involved in that task. However, as the name suggests, Resting State fMRI is based on resting fluctuations in the amount of oxygen used by brain areas – the subjects are not performing any mental tasks. During rest though, the brain is still active – thoughts flit through your head, muscles twitch, etc. So by watching for brain activity during rest, and keeping track of when certain brain areas become active, you can infer that, since area B became active right after area A did, perhaps the two are connected.
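The core of that inference can be illustrated with a toy calculation. This is my own synthetic example, not the HCP’s actual pipeline (real resting-state analyses are far more involved): if region B’s signal echoes region A’s a couple of time points later, the lagged correlation between the two is high, hinting at a connection.

```python
import random

random.seed(0)

# Synthetic "activity" for region A, and a region B that echoes A
# two time points later with some added noise.
a = [random.gauss(0, 1) for _ in range(200)]
lag = 2
b = [0.0] * lag + [x + random.gauss(0, 0.3) for x in a[:-lag]]

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

print(pearson(a, b))               # near zero: no same-time relationship
print(pearson(a[:-lag], b[lag:]))  # high: B follows A by two time steps
```

The same-time correlation looks like noise, but shifting B back by the lag reveals the dependence – the “B became active right after A did” signature described above.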

It is important to note that these techniques do not get down to the cellular level – the resolution of fMRI is something on the order of millimetres cubed, a volume that can house hundreds of thousands of cells. So the HCP does not purport to generate a replica of the brain on the cellular level, but regardless, the data produced will be valuable in making predictions as to information flow in the human brain, and will also hopefully give us a look at how brain wiring differs between individuals.

While Akil and friends are clearly optimistic about the study of the human connectome, they offer much less information on what they call “microconnectomes,” connectomes on the cellular level. In particular they cite the Brainbow mouse, which is a strain of mice genetically engineered to express fluorescent proteins in their neurons, resulting in a brain that has neurons that fluoresce in many different colors. The idea behind these mice is that the many different-colored neurons should make it easy to tell neurons apart and thus simplify studying the circuitry. However, despite producing some beautiful and award winning images, the Brainbow mouse has yet to really deliver on the connectomics scene. So while Akil and friends mention that yes, the effort to map circuitry on the cellular level is underway, they don’t give much insight. Personally I think that Akil and friends have their priorities a bit backward, but I’m an unabashed cellular junkie so don’t trust me. However, I will offer that the basis of neural computation is the passing around of tiny electrical impulses between neurons that are arranged into extremely complex circuits. And so, to truly understand the brain, we need to understand it on a cellular level. Despite this oversight by the authors though, it has been a good year for connectomics, and if you would like to hear about some of the advances you should have a look at some of my older posts that I mentioned at the beginning of this piece.

Although their article might leave you with a skewed impression that the HCP is the be-all and end-all in the field, Akil and friends do a good job of driving home the scale of these projects and the need for compiling and sharing all this data to optimize its analysis. In particular they focus on the Neuroscience Information Framework, an online database of databases that provides open access to all types of neuroscience data. Akil and friends note that although this is a formidable organization, it has its drawbacks. For instance, the most notable roadblock to accessing the NIF’s wealth of data is the project-to-project and database-to-database disparity in terminology. To deal with this the NIF created Neurolex, a standardized naming system that allows the NIF’s users to effectively search its holdings. The challenge now is getting people to use this system when submitting data. Akil and friends argue that neuroinformatics approaches such as this allow for navigation and integration of multiple tiers of huge amounts of data, which will in turn facilitate the untangling of neuroscientific questions that are becoming more and more complex. They leave off with 8 recommendations which essentially boil down to:
1.) Share your data,
2.) But make sure it is standardized.
3.) And get used to the fact that machines and software that most of us don’t understand are here to stay and are the way of the future.

One major issue on the road to this type of collaborative neuroinformatics approach is the cultural shift that the scientific community will need to undergo; scientists aren’t necessarily all that willing to share their data fully and irrevocably – what if someone else makes the discovery from their data before they do? The point, however, is not just to open up your data on its own; the point is to amass enough data to be able to synthesize something new. Along that road there are certainly going to be hardships – we likely will need “data stewards” to manage the databases, we’ll need new software, we’ll need storage space, and new ways of understanding huge amounts of data – but by creating more data and amassing it we gain a new level of depth and descriptive ability that will undoubtedly bear fruit.

Where Does All This Data and the Coming Flood Leave Us?

In a 2008 article, Chris Anderson, editor of Wired magazine, put forth the idea that the scientific method as we know it is becoming obsolete. The work flow of “hypothesize – model – test” is soon to be antiquated given the massive amount of data now available to us. He takes up the example set by Google and its ability to track activity on the internet. Google has access to information on how people behave – specifically, which websites we go to and all the info pertaining to our cyber travels. Anderson asks, who cares why people do what they do on the internet? “The point is that they do it, and we can track and measure it with unprecedented fidelity.” His point is that we can make as many hypotheses about people’s internet behavior as we want, but it doesn’t matter – the data is there just waiting to be mined – “With enough data, the numbers speak for themselves.”

At one point I think he hits the central point without really realizing it, though. He criticizes the fact that scientists are taught to be wary of correlation, learning early on that correlation is not causation. He follows that with:

What he misses though, is that when we get to this level we are no longer dealing with correlation – we are dealing with big enough numbers that the data is fully descriptive, truly representative, exhaustive. By collecting all data from the internet, you are tracking the activity of every person using it, and so you have no need for models because you have the real thing recorded. The tale that Google is telling with the internet is one of description, with no need for hypotheses or trust in correlation. They have moved past the point of needing to figure out how to collect the data. In their field, they have the data – all of it. The hard part now is figuring out ways to interpret it.

What does this say about neuroscience though? Not much. But we can look forward to the days when our acquisition techniques produce fully descriptive data – when we can connecto-type individual brains and enter a new (and probably very scary) age of brain science. At that point we will likely have to do the same as Google and take a back seat to connectome-trawling algorithms that will describe the brain replicas that we feed into the supercomputers needed to store them. This amounts to true naive observation, description without preconceived notions (if, of course, we can design the algorithms to be objective). Naive observation, followed by informed induction of generalizations from the data is, in my opinion, probably the best way to conduct science. If you can pick through the data without any bias and pull out the rules, laws, facts as you come across them, there is no need for making assumptions and hypotheses going into the experiment. However, we aren’t there yet, so in the meantime, we’ll have to continue wading through our ankle deep data. Welcome the flood when it truly hits though, because it will transform the way we understand the brain and ourselves.

As I mentioned in my summary of the meat and potatoes of Perez-Cruz and friends’ paper, this group largely reproduced previous findings, but they also added a new dimension of depth to those findings. Another attractive aspect of the paper was its simplicity, and although this was partially its downfall, it is nice to see important findings made elegantly and without extremely complicated procedures and analysis.

There were two main instances in which I was impressed by the simplicity of the paper. The first is that it used very basic techniques to describe a very basic effect of the two mutations on spine density. Perez-Cruz and friends used a technique for staining neurons that was discovered by Camillo Golgi in 1873. Although this technique has been modified over the ensuing century and a half, and is now the Golgi-Cox method, the gist remains – it is a staining protocol that allows for strong labeling of between 1 and 10% of neurons in a histological slice. Cellular neuroscience started with Golgi staining almost 150 years ago, and since I am nostalgic, it’s nice to see that it is still useful. It is also nice to see that it was useful in identifying such a pronounced and important, but again, basic effect; namely, the loss of spines. Perez-Cruz and friends used Golgi staining to assess spine loss on pyramidal neurons in area CA1 of the hippocampus. Rather than simply looking at one area of the dendritic tree of these cells though, they assessed 4 different dendritic subregions: basal dendrite subregions proximal and distal to the soma, and apical dendrite subregions proximal and distal to the soma. The importance of considering these different subregions came out in the results, as they found that both mutants showed spine loss specifically on proximal basal dendrites (35-45% loss), while only the London mutant showed around 30% spine loss on proximal apical dendrites. While many people, including myself, tend to overlook the differences between dendritic subregions, it is clear that different dendritic subregions in the hippocampus receive inputs from separate brain areas and neuronal subpopulations, so kudos to this group for going the extra mile.

The second instance of simplicity comes when the group links the spine loss phenotype to memory deficits, albeit in an unavoidably correlative way. Perez-Cruz and friends find that the Swedish mutant has a deficit in contextual fear conditioning, while the London mutant has a deficit in the Morris Water Maze. Wild type mice froze much more often in the fear-conditioned context than the Swedish mutants, indicating that the mutants have a deficit in contextual fear conditioning. On the other hand, the London mutants didn’t have a contextual fear deficit. They did, however, have a deficit in the Morris Water Maze test, exhibiting a significantly longer latency to find a submerged platform in a small pool of water after 9 days of training. These data indicate a memory deficit as a result of the mutations. One issue with these experiments is that although each mutant strain showed a deficit in one task, neither had a deficit in the other. Perez-Cruz and friends chalk this up to the fact that the mutants are in different genetic backgrounds, so it would have been nice to see the tests done in the same backgrounds – although I appreciate the time it takes to back-cross a mutation into an alternative background. However, I also have a soft spot for the mutation- and strain-specific discrepancies of these memory tests because they emphasize the complexity of memory to the point where people just have to be truthful and say we don’t understand why this is the case. Honesty in science is sometimes a triumph in itself. Regardless though, both mutants show some type of memory deficit associated with the spine loss.

After examining these basic phenotypes, Perez-Cruz and friends dig a bit deeper. It is well known that over-excitation in neural circuits is one cause of spine loss. For instance, decreased spine density is a well characterized consequence of epileptic seizures. So, given the spine loss that they see, this group wanted to look at activity levels in their mutant mice. Rather than assessing activity directly though, they look at the expression levels of a protein called Arc (activity-regulated cytoskeleton-associated protein), which, as you can tell from the name, is regulated by activity levels. Specifically, the higher the activity, the more Arc is expressed. Indeed, Perez-Cruz and friends found that both the Swedish and London mutants expressed higher levels of Arc in their hippocampi, regardless of whether environmental enrichment and memory testing were done on them. This indicates a higher level of activity in the mutant hippocampi, and likely overly high activity, since the Arc increase is concomitant with spine loss. One problem with their assessment of Arc expression, though, was that they only performed low magnification analysis of Arc immunostaining, quantifying their DAB staining over an entire section of CA1. This is not the most accurate staining analysis, and it would have been more interesting to see the subcellular localization of Arc, since it is located at individual synapses.

However, Perez-Cruz and friends go on to look for an explanation of the over-activity that likely led to the spine loss and Arc upregulation. They posit that since inhibitory interneurons are known to limit the level of excitation in the hippocampus, if some of them died, you would expect an increase in activity levels. Indeed, some groups have shown that by pharmacologically inhibiting these interneurons, you can induce a robust decrease in spine density. So, the group assessed the number of inhibitory interneurons in the hippocampi of their Swedish mutants (oddly they didn’t assess the London mutants) and found that compared to controls, the mutants had fewer interneurons in the area surrounding the basal dendrites of the CA1 pyramidal cells (recall, this is where the group saw the majority of the spine loss).

So in the end Perez-Cruz and friends present a pretty complete story. They first show that both the Swedish and London mutant models of Alzheimer’s have spine loss in specific dendritic subregions of CA1 pyramidal cells. They link this spine loss to memory deficits. They then find that the spine loss is likely correlated with increased activity levels in the hippocampus, and finally indicate that this increased activity level may be due to a loss of inhibitory interneurons. Based on this last point, the group suggests that by protecting this interneuron population we may be able to mitigate the spine loss that likely gives rise to cognitive deficits associated with Alzheimer’s, but I must admit I can’t figure out exactly how we might do that, particularly since we don’t seem to know why there are fewer of these interneurons in the mutant mice. Discovering more targets for potential therapeutics certainly isn’t a bad thing though.

Although the experiments and results were fairly simple, Perez-Cruz and friends’ discussion of their findings is decidedly more complex. The discussion goes in depth into how these findings might possibly fit into the expansive field of Alzheimer’s research, so if you are interested I would highly recommend checking out the paper itself.

What you see in the header image of The Naive Observer is part of a dendrite. Dendrites are the information collecting arms of neurons. Neurons also put out another long, snaking and highly branched arm called an axon, which carries info by conducting tiny electrical pulses away from its cell body. Axons transfer those pulses to dendrites at small connections called synapses. On the dendrite in the header, each little protrusion is the dendritic counterpart of a synapse, called a spine. This is where messages are received. In this picture, a fluorescent protein makes only the dendrite glow, so you have to imagine all the axons contacting each spine. They are there, you just can’t see them.

Synapses are where the action happens in the brain. Our brains (and by extension, we) are so complex and able to do so much because of the info that passes between neurons at synaptic connections. When it comes to it, we are our synapses and the electrochemical signals that pass through them. So each little spine on the header dendrite could be a part of a memory, an ability, something totally subconscious that we would never be aware of or any of myriad other brain functions. Regardless of what a specific spine does though, this is what part of the experience of a living being looks like in the flesh.

Knowing this, it should come as no surprise that losing spines results in major changes to a person’s mental make up. This is exactly what happens in Alzheimer’s disease; along with a host of other structural changes in the brain, dendritic spines disappear, and with them likely go parts of the person suffering the disease. However, it is hard to study the loss of spines over time in humans with Alzheimer’s, so efforts have been mounted to produce mice that are genetically engineered to develop Alzheimer’s. To understand these mouse models, you should know the 3 main cellular alterations that cause and accompany the disease:

1.) A protein called Amyloid β accumulates in plaques around neurons and cerebral blood vessels (the “neurofibrillary tangles” also characteristic of the disease are formed by a different protein, tau).
2.) Certain neuronal subpopulations die off.
3.) Many synapses disappear at an early clinical stage of the disease. Since synapses are thought to be the cellular basis of memory, loss of synapses (and thus spines) is expected given the well known memory deficits involved in Alzheimer’s.

Mouse models of Alzheimer’s are based on the first point. To generate these models, people have genetically engineered mice with mutated amyloid β (Aβ) genes that result in the protein eventually accumulating, aggregating, and forming plaques. It was originally assumed that the Aβ plaques were the major problem in Alzheimer’s, although no one is sure whether these plaques are actually the pathological agent, just a result of something else that is really causing the disease, or whether the formation of these plaques is actually a cellular response to try and counter the disease. However, while we still aren’t sure about the plaques themselves, it’s becoming increasingly clear that soluble Aβ particles that exist before plaques form are also extremely detrimental. In fact this became clear when people started studying the mouse models that had the mutant Aβ: these mice started presenting cognitive deficits (like poor performance on memory tasks) before the plaques even formed.

In a paper published this March, Perez-Cruz and friends took 2 of these mouse models and examined the cellular architecture of neurons in a part of the brain called the hippocampus – famous for its role in learning and memory. Both of these types of mutant mice have mutations that lead to increased expression of Aβ protein, leading to Alzheimer’s-like symptoms in the mice. These mutations were actually first found in human families affected by congenital, early-onset Alzheimer’s. One of these families is Swedish, and the other is from London, so the mutations were aptly named the Swedish and London mutations.

Perez-Cruz and friends used these mice to essentially reproduce previous findings, but did it in such a way as to synthesize some new information. They showed that both of these mutations lead to decreased spine density, and that this is presumably the cause of the memory deficits that the mutant mice present. However, by assessing the levels of neural activity in the hippocampi of these mice, they provide evidence that the spine loss is a result of unhealthily high levels of activity. They go a step further and show that these high activity levels may be due to a decreased number of inhibitory neurons that normally help to keep activity at acceptable levels. Thus, in their conclusion, Perez-Cruz and friends offer that perhaps by finding a way to keep these inhibitory interneurons healthy we could help prevent the spine loss associated with memory deficits in Alzheimer’s.

How to keep specific populations of inhibitory neurons healthy is a big task though – most importantly, we have to figure out what is causing them to get sick. The plaques, or something completely different? While that is hopefully being pursued, Alzheimer’s is a huge field of study right now, with lots of work going into understanding the disease pathology and producing effective treatments. However, with Alzheimer’s rates rising in an aging population, help can’t come soon enough, so we’ll have to keep our fingers crossed for now.

If you’d like a more positive conclusion, you can have a look at this – I’ve heard a couple of talks in the past couple months that tell me art therapy is extremely useful in alleviating the isolation of a deteriorating brain.

After reading about the meat and potatoes of Chen and friends’ paper, you may or may not agree that Layer 2/3 “dynamic zone” interneurons seem to control ocular dominance plasticity. Below, I have dissected the paper in more depth to try to get at this issue, but to get into the nitty gritty details I need to start by giving some more background.

Layer 2/3 interneurons have recently been shown to undergo a strong ocular dominance shift following monocular deprivation. This parallels visual cortex neurons in general, where following monocular deprivation, responses of binocular neurons to stimulation in the deprived eye get weaker, while responses to stimulation in the non-deprived eye strengthen. It was recently found that this effect is actually much stronger in Layer 2/3 interneurons due to a larger desensitization of these cells to deprived-eye input. Although they don’t mention it very clearly, Chen and friends predict that this effect is due to the Layer 2/3 dendritic branch plasticity that they identified and characterized in the 2 previous papers I mentioned in the meat and potatoes. Specifically, the rationale is that following monocular deprivation, the deprived eye will have less physiological input into binocular V1, resulting in a loss of synapses carrying input from the deprived eye, particularly onto Layer 2/3 interneurons (judging by the fact that their electrical properties are affected more strongly by monocular deprivation). The idea is that the dendritic branch tip retraction seen by this group has something to do with physical loss of synapses (which they actually show later on). However, which comes first is something they don’t guess at – does the monocular deprivation cause retraction of axons resulting in dendritic tip retraction, or does the interneuron sense a decrease in input activity and retract its branch tips? This is the question of who is sensing activity levels, and will be an interesting question to answer.

So, the overarching idea is that these Layer 2/3 interneurons are the central force driving adult visual cortex plasticity. Again, as I mentioned earlier, this notion began with the Nedivi lab’s identification of a superficial “dynamic zone” of Layer 2/3 that contains these interneurons with highly dynamic dendritic trees. The idea is further bolstered by the older findings that cells in cortical layers 1-3 (extragranular layers) have a greater potential for adult plasticity than cells in other layers. So it seems like these Layer 2/3 interneurons, particularly those in the “dynamic zone”, are master controllers of plasticity in the visual cortex.

The first step in testing this theory was to see what happened to binocular V1 interneuron branch tip dynamics following monocular deprivation. (They identified binocular vs monocular V1 using Optical Intrinsic Signal Imaging.) Prior to monocular deprivation, around 3% of branch tips either elongated or shortened. In contrast, following monocular deprivation (always in the contralateral eye) branch tip dynamics increased significantly, to 9% at 4 days post-deprivation and 8.5% at 7 days post-deprivation. However, overall branch length was conserved, indicating that the proportions of tips that elongated and retracted were the same in the normal and deprived conditions. When Chen and friends assessed branch tip dynamics at 14 days post-deprivation, they were below baseline levels. Amazingly, this was exactly what Chen and friends had predicted, since it was already known from physiological work that it takes about 7 days for ocular dominance shifts to occur following sensory deprivation. To verify that branch tip dynamics actually involve changes in synapse number, Chen and friends checked by electron microscopy whether a newly elongated branch tip contained synapses, and found that it did. They only did this for one tip though, so all this tells us is that branch tips can support synapses. After seeing the huge electron microscopic reconstructions in these papers that I reviewed, I am very disappointed that Chen and friends were convinced by a single branch tip. Aside from providing real evidence for their claim, assessing synapse size and density on these branch tips could be very interesting. Are these new synapses stronger or more numerous to allow their effects to be felt better?
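The length-conservation argument can be made concrete with toy numbers (my own, chosen only to roughly mirror the ~3% vs ~9% turnover figures): if every elongation is matched by a retraction of equal size, turnover can triple while net branch length stays flat.

```python
# Toy illustration (my numbers, not the paper's raw data): branch tip
# turnover triples after deprivation, yet net length change stays zero
# because elongations and retractions remain balanced.
baseline = [+5, -5] + [0] * 64                   # 2 of 66 tips dynamic (~3%)
deprived = [+5, +5, +5, -5, -5, -5] + [0] * 60   # 6 of 66 tips dynamic (~9%)

for name, tips in [("baseline", baseline), ("deprived", deprived)]:
    dynamic = sum(1 for t in tips if t != 0) / len(tips)
    net = sum(tips)  # total length change across all tips
    print(f"{name}: dynamic tips = {dynamic:.0%}, net length change = {net}")
```

This is why conserved total length implies equal proportions of elongating and retracting tips, even as the turnover rate jumps.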

Chen and friends also saw that monocular deprivation led to decreased monocular V1 tip dynamics at 4 days post-deprivation, and a return to baseline at 7 days post. They brush this off as merely falling in line with the idea that the plasticity is occurring in conjunction with the ocular dominance shift in binocular cortex, but it strikes me that this decrease in dynamics reflects a stabilization of the monocular circuit. Why might this stabilization occur?

Chen and friends then got more specific. They note that evidence compiled from the primate visual cortex over the past number of years shows that Layer 2/3 is a location where bottom-up, feed-forward input from Layer 4 neurons (which receive sensory input from the thalamus) converges with top-down feedback input that is thought to modulate attention and stimulus salience. Thus, the Layer 2/3 interneurons are perfectly positioned to receive feed-forward input from below, and feedback input from above. So Chen and friends tested whether there is any difference in plasticity between interneuron dendrites positioned higher in the cortex and those located deeper. What they found fits really well with the idea of monocular deprivation inducing bottom-up remodeling of Layer 2/3 interneurons and shifting of ocular dominance. Under control conditions, dendrites located deeper in Layer 2/3 account for ~37% of remodeling (both elongation and retraction) while dendrites that project into Layer 1 account for ~63% of remodeling. However, following 4 days of monocular deprivation, Chen and friends find that dendrites located deeper in the cortex account for around 90% of the retractions, while the distribution of elongated dendrites is similar to control. Then, between 4 and 7 days post-deprivation, dendrites in lower Layer 2/3 account for about 63% of elongations (up from 37% in control and at 0-4 days post-deprivation) and the distribution of retractions returns to control levels. I find this truly amazing. Basically what Chen and friends have shown here is that directly following deprivation, you get a net retraction of dendrites on these interneurons, presumably resulting in a loss of synapses. THEN, a few days later, you get a regrowth of dendritic branches, presumably resulting in an increase in synapses. Furthermore, this remodeling is occurring in the cortical laminae where sensory input arrives.
When you throw this together, and put some faith in the correlational nature of this study, it looks like we are seeing the functional rewiring of the visual circuit that accounts for the ocular dominance shift seen following monocular deprivation; in other words, we have presumably witnessed an initial loss of contralateral input, followed by formation of ipsilateral input (remember we are looking at binocular V1).

Of course, the big question after seeing all these dendritic dynamics is what is actually going on with the connectivity; what’s happening to the synapses? We saw earlier that Chen and friends made a lame-duck attempt at proving their newly elongated branch tips contain synapses, and we have to assume their sample size of one is representative here, but to change the circuitry you might need rewiring at higher levels than just the Layer 2/3 interneurons. The group started by looking at boutons on Layer 2/3 interneuron axons. They see that under normal conditions, about 2.5% of boutons are eliminated and a similar proportion of new ones form. However, under monocular deprivation, bouton elimination drops to 0.7% and formation increases to a whopping 7.5%. This is a solid finding, and probably reflects the real synapse dynamics, but boutons aren’t the best readout of synapses, since they are identified by eye, and telling whether a bouton is a synapse or just a bump isn’t exactly a science. The best way to look at synapses is to look for actual synaptic machinery immunohistochemically. Thankfully, Chen and friends also stained for VGAT, the transporter that loads GABAergic vesicles with GABA, and which thus labels inhibitory synapses. Using this approach they assessed inhibitory synapse number onto Layer 2/3 and Layer 5 pyramidal neurons, both of which receive input from Layer 2/3 interneurons. While they found no change in inhibitory synapses onto Layer 2/3 pyramidals, they saw a 40% increase in inhibitory synapse density on Layer 5 pyramidal neuron apical dendrites. Whether these new synapses come strictly from Layer 2/3 interneurons isn’t clear from this data, but given the increase in axon bouton density, it seems likely. One piece of information that the group failed to provide was where the axonal boutons were located – if there was an increase in their numbers in Layer 5, I would be more convinced by this section of the paper.
However, it looks likely that synapse number is indeed increasing.

Now for the sensationalist part. Chen and friends hit on a previous paper indicating that fluoxetine (read: Prozac) “restores a juvenile level of ocular dominance plasticity in the adult […] rodent,” due to its inhibition of restrictive, intrahemispheric GABAergic transmission. In agreement with this finding, Chen and friends saw that fluoxetine almost doubles Layer 2/3 interneuron branch dynamics compared to control. This on its own is pretty cool – a structural correlate of antidepressant treatment. But they took it a step further. You will recall from earlier that monocular deprivation results in an initial decrease in elongations, followed by an increase. When Chen and friends combined fluoxetine treatment with monocular deprivation, they did not see this early decrease in elongations! Without looking at synapse turnover rates, it is hard to tell what this means: it could be that the same old synapses are staying around and there is increased elongation, OR that the synapses are disappearing as with normal monocular deprivation but the plasticity that will eventually restore them is initiated earlier, potentially leading to a quicker ocular dominance shift.

Being optimistic and slightly irresponsible, it is encouraging to think that the latter is the case, because this could mean that combining fluoxetine treatment with some kind of instructive stimulus – such as cognitive behavioural therapy or talk therapy – might lead to quicker and potentially more effective rewiring of brain circuitry, improving recovery from depression, or perhaps even making it possible in intractable cases. I’m sure this is already going on to some extent, or at least I hope so, but it is comforting to see that there is at least a far-fetched and whimsical cellular argument for its success.

With that, we are left with a major cliffhanger. Hopefully you have as many follow-up experiments flitting around your head as I do. To begin with, Chen and friends finish their paper with evidence that fluoxetine increases Layer 2/3 interneuron dendrite elongation immediately following (i.e., at 4 days post-) monocular deprivation. But in the rest of the paper they also looked at branch tip dynamics at 7 days post-deprivation – what happens after 4 days, when normal monocular deprivation starts to cause increased elongation? Are there new synapses formed already? Do the elongation dynamics increase even further? Then, does fluoxetine paired with deprivation cause more inhibitory synapses to form than deprivation on its own? Does it increase the extent of the ocular dominance shift? Is there a detrimental effect? If you open the eye again following short deprivation, does the fluoxetine condition allow for better recovery? There are many questions to ask, but I will leave these few with you and open it up for discussion below.

For a long time, neuroscientific dogma dictated that the brain is hardwired. This was cemented by the venerable father of cellular neuroscience, Ramon y Cajal, when he said: “Nerve paths are something fixed, ended, immutable. Everything may die; nothing may be regenerated.” That means no repair, and no rearrangement. (Although apparently his opinions on this were somewhat ambiguous.) However, it’s pretty apparent that our minds change over time – every day, likely every second – so if our brain is the seat of the mind, shouldn’t our brain also change? It was well known that neurons in the peripheral nervous system could regenerate, but the dogma of the immutable brain held for the central nervous system. Then, in the 1980s, Albert Aguayo’s lab at McGill discovered that, when given the appropriate environment, neurons in the brain retain the ability to regenerate, and in so doing they opened a whole new field of study. We have yet to see major benefits from the study of central neural regeneration, but then again we have yet to understand the related natural processes that actually keep it from happening. While things are steadily chugging along in the study of regeneration, there is a whole other field looking at the normal, day-to-day changes in our brain. The ability of our brains to change is referred to as neural plasticity. (Think malleable plastic.) In fact, the brain can change on a grand scale. For instance, when you lose a finger, the area of the brain that used to control that finger can be recruited to help control the remaining fingers with better accuracy. Although if you are a struggling instrumentalist I wouldn’t recommend cutting off your pinky. (Unless you play with your feet – my father convinced me when I was young and impressionable that the pinky toe is thoroughly useless and would be lost to evolution in a matter of generations.)
While large swathes of neurons can change their function, neural plasticity is also thought to be the basis of memory formation. But regardless of the result of neural plasticity on the level of behaviour and the mind, it involves physical change on the level of neuronal networks, cells and synapses. Synaptic plasticity in particular has been studied in extreme depth, albeit with a lot left to learn. At its most basic level, synaptic plasticity can involve the appearance of new synapses, the disappearance of preexisting synapses, and the strengthening or weakening of synapses. The basic idea is that memory formation involves strengthening and/or formation of new synapses, while memory loss involves the opposite (or just straight up cell death), but it is almost certainly more complex than that.

So, although Ramon might roll in his grave, I will say very firmly that the brain is not hardwired. Change is possible.

To this effect, a study just published in Nature Neuroscience by Chen and friends looked at the plasticity of inhibitory neurons in the visual cortex, one of the first brain regions where plasticity was studied. David Hubel and Torsten Wiesel, whom I’ve written about briefly in the past (point #3), discovered a lot about the mammalian visual system, and laid the groundwork for a lot of the subsequent experimentation on visual plasticity.

Their first major discovery was their identification of Orientation Selectivity of neurons in the visual cortex. Orientation selectivity refers to the ability of a single cell to respond specifically to a bar of a certain orientation presented in a specific location in the visual field. To be clear, orientation selective neurons are located in the visual cortex at the back of the brain, and become active when the eye sees a bar of a particular orientation (these neurons receive input from the neurons in the eye). You can read more about visual stimulus selectivity and how it arises in a couple of my recent posts.

Their next major discovery was that neurons in the visual cortex are arranged in columns that run perpendicularly into the brain, starting at the outer surface and extending up to ½ a centimetre inward. The main thing to know about these columns is that each one mainly receives input from either the right or left eye. Thus the columns are referred to as Ocular Dominance Columns, since either the left or right eye dominates the input into each column.

Hubel and Wiesel’s third major discovery is where they broke into the world of neural plasticity. In particular, they identified experience-dependent plasticity of ocular dominance columns. They found that when they raised kittens with one eye stitched shut, a procedure termed monocular deprivation, there was an ocular dominance shift that resulted in the non-deprived eye being more heavily represented in the visual cortex: columns receiving input from the non-deprived eye were physically much bigger and contained more cells than the columns receiving input from the deprived eye. So the overarching idea is that, if you alter the experience of developing circuitry (i.e., by closing one eye), you change how that circuitry develops. Another major finding by Hubel and Wiesel was that there is a critical period of time in which this experience-dependent plasticity can occur. Since then, critical periods have been identified for lots of other developing systems, the most notable being language acquisition; after a certain age it is much more difficult to acquire a new language.

Their discovery of a critical period for experience-dependent plasticity indicated that the adult brain is less plastic than the developing juvenile brain, and that you shouldn’t expect to see ocular dominance shifts when an adult undergoes monocular deprivation. However! Hubel and Wiesel were working in the cat, and it turns out that rodents seem to retain a unique propensity for experience-dependent plasticity into adulthood, showing ocular dominance shifts following monocular deprivation at mature ages. (If you are skeptical, here is a good paper.) This makes the rodent visual system an ideal model for experience-dependent plasticity of cortical circuitry in adults.

So, making use of this unique feature of the adult mouse visual system, the Nedivi lab (home of Chen and friends) has been looking at plasticity of inhibitory interneurons in the visual cortex since the early 2000s. To remind you, dendrites are the information gathering arms of neurons, onto which synapses are made. So, you can imagine that changing the structure of dendrites, making them shorter or longer, thus taking away or adding synapses, could be a form of plasticity. It turns out that the folks in the Nedivi lab are pretty sure they found the first unambiguous evidence of dendritic remodeling involved in neural plasticity. In a 2006 paper they found that the tips of dendritic branches of inhibitory interneurons in the mouse visual cortex elongated and retracted “on a day to day basis.” Then, in their next paper they showed that there is a specific “dynamic zone” (corresponding to a superficial layer of the visual cortex) where interneuron dendritic branch tips are more plastic than those of other adult neurons. In their most recent paper, which I review in depth here, Chen and friends show that the ambient levels of plasticity they see in these neurons are tripled following monocular deprivation similar to that done by Hubel and Wiesel in the ’70s. The message they are sending here is that this population of superficial inhibitory neurons that retains a high level of plasticity into adulthood is a kind of master mediator of cortical neuroplasticity. It would be extremely interesting to see if the same things occur in other areas of the cortex known to be plastic into adulthood – for instance in the finger region of the motor cortex following loss of a finger. This body of work could be laying new groundwork for studying structural plasticity in other brain regions, which I find really exciting.

Something even more tantalizing, though, is the tidbit that Chen and friends leave off with. Fluoxetine, better known as the extremely popular antidepressant Prozac, restores the visual cortex to a “juvenile” level of plasticity. So Chen and friends tested it on their mice to see how it affected the dynamics of the dendrites they were looking at. They found that when they combined fluoxetine with monocular deprivation, the structural plasticity of the dynamic zone inhibitory neurons was even greater! From this they make the tentative conclusion that combining fluoxetine treatment with some kind of “instructive stimulus” may enhance experience-dependent plasticity of circuitry, allowing for better recovery from depression, obsessive compulsive disorder, and other mental illnesses that may require some kind of neural rewiring. To me this evidence is extremely promising, since that instructive stimulus could be something like cognitive behavioural or talk therapy. Combining fluoxetine with something like that won’t require long-winded clinical trials; people can just start doing it, although I would imagine this has begun already.

Although I’ve said it about many other papers, I can’t help myself here: All told, unless you want to deprive yourself, this is one more group to keep an eye on.

Apparently Richard Axel, 2004 Nobelist in Medicine or Physiology, “espouses the view […] that ALL human behavior is learned.” Glad to know it. You can read about Jen Leslie’s lunch time conversation with Dr. Axel on her blog, Scientific (mis)Communication.

As I said in my description of the meat and potatoes of Nouvian and friends’ paper, they showed that the inner hair cell’s (IHC) ribbon synapses don’t play by the rules.

In truth it was established quite early on that these synapses are rather odd. To begin with, inner hair cells are not technically neurons; they are epithelial cells that transduce mechanical energy in the cochlea into electrochemical energy in the nervous system. They also don’t fire action potentials, but are electrotonically depolarized by mechanical stimulation of the organ of Corti in the cochlea as sound waves enter the inner ear. The mechanical stimulation causes shearing forces on the hairs atop the IHCs, resulting in mechanical gating of cation channels. The ensuing depolarization opens voltage gated calcium channels, resulting in synaptic vesicle fusion. To be clear, this is synaptic transmission by something technically defined as an epithelial cell. Furthermore, these cells have ribbon synapses, meaning each synapse features a long, electron-dense ribbon that is crowded with vesicles, presumably to increase auditory sensitivity. Finally, they are strange on the molecular level as well: on the one hand they lack the synaptotagmins and complexins that are generally important for normal synaptic release, while on the other, to function properly, they require molecules that aren’t very common (otoferlin and RIBEYE).

Now though, Nouvian and friends have added another peculiarity to the list. As I mentioned in my description of the meat and potatoes of the paper, when the group applied individual botulinum toxins (BoNTs) to specifically cleave each type of SNARE (synaptobrevins, SNAP-25 and syntaxin), vesicular fusion continued as usual. It is important to note that Nouvian and friends couldn’t study vesicular fusion by measuring postsynaptic responses; to do their recordings they had to remove the IHCs from the cochlea, but unfortunately the IHCs’ postsynaptic partners – the afferent spiral ganglion neurons – were left behind when the IHCs were isolated. Instead, Nouvian and friends measured the exocytic membrane capacitance increases that occur when a vesicle fuses and adds its membrane to the presynaptic terminal. To do this they did whole cell patch clamp recordings, which allowed them to simultaneously depolarize the cell enough to cause vesicular fusion and measure the capacitance changes. They could also apply the toxins directly into the cell through the patch pipette. Thus they were able to assess whether vesicles were fusing with the membrane without recording the postsynaptic responses that would presumably accompany fusion in vivo.
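The capacitance readout rests on a simple relation: fusing a vesicle adds its membrane area to the cell surface, and membrane capacitance scales with area (specific membrane capacitance of biological membranes is roughly 1 µF/cm²). A back-of-the-envelope sketch, using assumed order-of-magnitude vesicle diameters (my own illustrative numbers, not values from the paper), shows why chromaffin dense-core granules give much larger, easier-to-resolve capacitance steps than small synaptic vesicles:

```python
import math

SPECIFIC_CAPACITANCE = 1e-2  # F/m^2, i.e. ~1 uF/cm^2 for biological membranes

def capacitance_step(diameter_m):
    """Capacitance added when one spherical vesicle fuses: dC = c_m * pi * d^2."""
    area = math.pi * diameter_m ** 2  # surface area of a sphere of diameter d
    return SPECIFIC_CAPACITANCE * area

# Assumed typical diameters (illustrative, not from Nouvian and friends):
synaptic_vesicle = capacitance_step(40e-9)   # ~40 nm small synaptic vesicle
dense_core = capacitance_step(300e-9)        # ~300 nm chromaffin granule

print(f"synaptic vesicle step: {synaptic_vesicle * 1e18:.0f} aF")
print(f"dense-core granule step: {dense_core * 1e15:.1f} fF")
```

Under these assumptions, a dense-core granule contributes a step over fifty times larger than a synaptic vesicle (the ratio goes as diameter squared), which is presumably why chromaffin cells make such a clean test bed for checking that the toxins actually block fusion.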

Likely realizing that claiming synaptic transmission without the canonical vesicular fusion machinery is not an easy sell (although not terribly outlandish either) Nouvian and friends tackled this question very thoroughly and from numerous angles.

The Many Angles

To start, they verified that their toxins worked, using two different methods. First they applied each toxin to its complementary SNARE (BoNT/D cleaves synaptobrevin; BoNT/E cleaves SNAP-25; and BoNT/C cleaves syntaxin) and verified by western blot that the toxins actually cleaved them. Second, the group examined the effect of their toxins on vesicular fusion in chromaffin cells. Chromaffin cells are neurotransmitter-releasing cells of the autonomic nervous system. They are predominantly found in the adrenal medulla, where they release epinephrine and norepinephrine from dense core vesicles into the blood stream, instead of across a synapse. I believe the reason Nouvian and friends used these cells to examine vesicular fusion is that the dense core vesicles they use are quite large, and thus it is easier to measure the incremental capacitance increase caused by the addition of that large chunk of vesicular membrane to the release site. When Nouvian and friends “poisoned” these cells by applying the BoNTs through a patch pipette, they stopped seeing any change in membrane capacitance following depolarization, indicating that vesicular fusion had been shut down by the toxins, as expected. However, as I said before, when they applied the same test to the IHCs, vesicular fusion carried on as usual, with the expected depolarization-induced incremental increases in membrane capacitance.

Since the botulinum toxin insensitivity in IHCs was perhaps unexpected, Nouvian and friends, whether of their own accord or after prompting from referees, undertook a lot of control experiments and alternative tests to verify the validity of their initial finding. To rule out a homeostatic increase in calcium influx accounting for the maintenance of fusion, they checked that the capacitance increase with respect to calcium influx was the same in poisoned versus control cells, and found that it was. They then verified that the toxins were actually making it into the cells by loading the cells with a fluorescently labelled BoNT/E (conjugated with Alexa 488) and imaging. They found that loading was reliable, and that even though this conjugated toxin still cleaved SNAP-25, it had no effect on capacitance increases. Then, taking things up a notch, Nouvian and friends allowed the BoNT/E more time to complete cleavage, and then used stronger current injection and photolytic calcium uncaging to induce vesicular fusion. Once again – the toxin had no effect. They repeated this barrage of tests with the other two toxins as well, so up to this point they had their bases covered.

Thankfully for Nouvian and friends, there are knockout mice for synaptobrevin-1, -2/3 and SNAP-25 that allowed them to assess the total loss of function of these SNAREs. Synaptobrevin-1 KO mice are the only ones that survive after birth, and they showed robust exocytic incremental capacitance increases in their IHCs. Synaptobrevin-2/3 and SNAP-25 KOs, on the other hand, do not survive past birth. To get around this, Nouvian and friends made organotypic cultures of the organ of Corti from embryonic mice. Once again, in these organotypics, capacitance increases occurred in response to depolarization of the IHCs, indicating intact vesicular fusion.

Convinced yet? Too bad. At this point Nouvian and friends actually went looking for the physical presence of SNAREs and their transcripts. They did find transcripts for each of the SNAREs with real time PCR, although these amplified at lower levels than genes known to be expressed in IHCs (otoferlin and parvalbumin). Then they knocked pHluorin-tagged synaptobrevin-1 (aka synapto-pHluorin) into the endogenous synaptobrevin-1 locus. pHluorin is a pH sensitive variant of GFP that fluoresces when exposed to neutral pH. When on a vesicle, the pH sensing domain of synapto-pHluorin is oriented into the vesicle lumen, where pH is low. Then, when the vesicle fuses with the membrane, the pHluorin domain faces the neutral extracellular space. Thus, synapto-pHluorin is a reporter for active vesicle fusion. In these mice, neither depolarization of hair cells nor application of an acidic solution revealed any fluorescence. They couldn’t detect GFP in the knock-in hair cells by immunohistochemistry either, but they could see it in the neuronal terminals that provide feedback onto the hair cells from the superior olivary complex, providing a good positive control.
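The pHluorin trick can be summed up in one equation. If we model the fluorophore as a simple single-site titration (a deliberate simplification of real ecliptic pHluorin, which switches off more steeply at low pH), the fraction in the fluorescent, deprotonated state is 1 / (1 + 10^(pKa − pH)). With an assumed pKa around 7.1, a sketch of the numbers shows why fusion produces a large fluorescence jump:

```python
def phluorin_signal(ph, pKa=7.1):
    """Relative fluorescence under a simple single-site titration model.

    f = 1 / (1 + 10**(pKa - pH)). The pKa of ~7.1 is an assumed
    literature-typical value; real ecliptic pHluorin is steeper than this.
    """
    return 1.0 / (1.0 + 10 ** (pKa - ph))

inside_vesicle = phluorin_signal(5.5)  # acidic vesicle lumen quenches the GFP
after_fusion = phluorin_signal(7.4)    # neutral extracellular space lights it up

print(f"luminal signal: {inside_vesicle:.3f}")
print(f"surface signal: {after_fusion:.3f}")
print(f"fold increase on fusion: {after_fusion / inside_vesicle:.0f}x")
```

So each fusion event should unquench the reporter by well over an order of magnitude, which is what makes the complete absence of signal in the knock-in hair cells such a telling negative result.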

Finally, they went looking for the actual proteins immunohistochemically. Using a total of 13 different antibodies, Nouvian and friends failed to identify any of the 4 neuronal SNAREs that might have been in hair cells, effectively pooh-poohing a previous claim that SNAREs are present. Meanwhile, the terminals of the feedback pathway I mentioned above provided positive controls for all these antibodies.

In their concluding remarks, Nouvian and friends offer the unlikely explanation that perhaps all 13 epitopes and the botulinum toxin cleavage sites are uniquely shielded in IHCs by interacting proteins, and that perhaps secondary SNAREs come to the rescue and compensate for the knocked-out SNAREs in their transgenic mice. But I think they have presented some pretty convincing work. They end the article by saying, “The most likely explanation of our data is that IHCs, being epithelial cells, make use of other SNARE proteins for synaptic vesicle exocytosis than neurons, which remain to be discovered.” In other words, we should have been expecting this outcome for years. I felt like I’d been duped when I read that, but after pondering it a bit I realized that it isn’t really so cut and dried. Yes, hair cells are technically classified as epithelial cells, but they are epithelial cells that make chemical synapses, making them pretty unique. So don’t feel too bad if you didn’t know the outcome of this paper without even reading it.

While it is a bit exhausting to be pummelled with so many controls and back-up experiments, and while I feel for the experimenters who pushed through what were probably a lot of suggestions from reviewers, it’s nice to see such a lucid, straightforward paper that covers all its bases. As always, the Brief Communication format didn’t do justice to the amount of work that went into their pretty strong conclusion, but I will assume they got it into the journal they wanted. That being said, it would have been nice to see one more thing: a behavioural test of hearing in the synaptobrevin-1 KO mouse. This probably would have been quite easy, and would have been the one piece of evidence proving that there is indeed transmission occurring, not just something that looks like vesicular fusion.

Finally, I’d like to point out that this study may have a translational role to play. While a lot is known about the genes involved in congenital hearing loss, it looks like a good number of syndromes have yet to be sorted out genetically. Perhaps examining uncharacterized mutant loci associated with congenital hearing loss will help sort out which genes are implicated in IHC vesicular fusion, improving our understanding of congenital hearing loss and perhaps informing future gene replacement strategies.

Honor Roll

"The Gay Animal Kingdom" by Jonah Lehrer
Apparently controversial when it was published, and despite some heteronormative language, this is an informative and well written article that everyone should read and chase up.

"This Is Your Brain on Sports"
Big time sports writer Le Anne Schreiber writes an excellent, openly speculative piece on how the brain watches sports. A good example of a journalist writing pretty good, honest science, despite a misunderstanding of TMS.