Thursday, December 27, 2012

If you have been wondering where new blog posts are, I am taking a holiday hiatus. I'll see you in January when biweekly posting will be resumed, and the cellular scale will celebrate its first birthday!

Thursday, December 20, 2012

Sure, playing video games (and therefore shooting in video games) releases dopamine, and sure, if you inject dopamine into people while they shoot real guns they will like shooting guns better. BUT the key implication here, that shooting guns in video games makes you like shooting real guns, demands evidence.

As a female Halo player myself, I think these Lady Spartans are awesome! (source)

Personally, I like shooter video games. I'm playing Halo 4 like the rest of the world right now and I played the heck out of Mass Effect earlier in the year. I have also shot real guns.

And guess what? Shooting real guns is just not really my thing. I find it a little bit scary and not that fun or exciting. The idea of going to a shooting range and shooting guns at paper targets for an hour sounds really boring to me. Shooting skeet or something moving, like an animal, also sounds pretty boring.

I am skeptical about the idea that the dopamine released during shooting video games transfers to more enjoyment while shooting real guns. I am willing to change my mind upon seeing some data, but having seen nothing to support this direct transfer, I don't think it exists.

This post is written in response to "Addicted to the Bang: The neuroscience of the gun." by Steve Kotler and Jim Olds. (They don't actually claim that dopamine release during video game shooting directly causes addiction to real shooting, but I think that someone might get that idea from the article.)

To add these channels you have to extract the parameters from known data. This means extracting Boltzmann curves and time constant information so you can tell the channel which voltages activate and inactivate it and how fast to open and close.

Activation (Boltzmann) curve for fast sodium channel

This step is tricky and can take a long time, but there is some software that can help. The Engauge Digitizer is one tool I could not live without.

Engauge is basically a tool that lets you manually trace curves from published figures and export the curve data as an Excel or .csv file. First you add axis points using the button at the top that has red plus signs on it, telling the software the values at three corners of the graph. Then you click the blue plus signs button and start to trace your graph, like so:

using Engauge Digitizer to extract channel data

Then you export the data as whichever type of file you want. Pretty nice!
I like to have the data this way because then I can overlay this figure trace with any other trace I want and can manually fit an equation to it.
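As a sketch of that fitting step, here is how you might fit a Boltzmann activation curve to digitized points using Python. The data points below are made-up stand-ins for an Engauge .csv export, not values from any real channel:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Steady-state activation: fraction of channels open at voltage v (mV)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

# Pretend these points came out of Engauge's .csv export (hypothetical data)
voltages = np.array([-60, -50, -40, -30, -20, -10, 0, 10], dtype=float)
activation = boltzmann(voltages, -30.0, 7.0) + 0.01 * np.array(
    [0.3, -0.5, 0.2, 0.4, -0.2, 0.1, -0.3, 0.2])  # small "digitizing" noise

# Fit the curve to recover the half-activation voltage and slope factor
params, _ = curve_fit(boltzmann, voltages, activation, p0=(-35.0, 5.0))
v_half, k = params
print(f"V1/2 = {v_half:.1f} mV, slope k = {k:.1f} mV")
```

The fitted V1/2 and slope are exactly the numbers you then hand to the simulator's channel definition.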

Channels are a hugely important part of a computational model. A recent paper from Eve Marder's lab shows that even with a very simple morphological model (just a soma), interesting electrical characteristics can be seen simply by manipulating the channels.

Kispersky et al., (2012) introduce an interesting paradox. They show that when you increase the sodium channel conductance, you see more action potentials at low current injections (like 200 pA). This is expected: the sodium channel causes the upswing of the action potential, and more sodium is thought to mean more excitability. However, the authors find that at a high current injection (like 10 nA), the increased sodium channel conductance actually decreases the firing rate. This is counter-intuitive because it goes against the more sodium = more excitability rule.

This is a pretty cool finding published in the Journal of Neuroscience using only a simple one-compartment model. The finding is based entirely on channel manipulation, and demonstrates how important these intrinsic channels are to any computational model.
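To get a feel for how such single-compartment experiments work, here is a minimal Hodgkin-Huxley-style simulation in Python. This is the textbook squid-axon model, not Kispersky et al.'s actual neuron, so don't expect it to reproduce their paradox; it just shows how you can dial a channel conductance up or down and count the resulting spikes:

```python
import numpy as np

def simulate_hh(i_inj=10.0, g_na=120.0, t_max=100.0, dt=0.01):
    """Forward-Euler Hodgkin-Huxley single compartment; returns spike count.

    i_inj in uA/cm^2, g_na in mS/cm^2, times in ms (classic squid-axon values)."""
    g_k, g_l = 36.0, 0.3                  # potassium and leak conductances
    e_na, e_k, e_l = 50.0, -77.0, -54.4   # reversal potentials (mV)
    c_m = 1.0                             # membrane capacitance (uF/cm^2)

    # standard HH rate functions
    a_m = lambda v: 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    b_m = lambda v: 4.0 * np.exp(-(v + 65) / 18)
    a_h = lambda v: 0.07 * np.exp(-(v + 65) / 20)
    b_h = lambda v: 1.0 / (1 + np.exp(-(v + 35) / 10))
    a_n = lambda v: 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    b_n = lambda v: 0.125 * np.exp(-(v + 65) / 80)

    v = -65.0
    m = a_m(v) / (a_m(v) + b_m(v))  # start gates at steady state
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))

    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / c_m
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        if v > 0 and not above:  # count upward crossings of 0 mV
            spikes += 1
        above = v > 0
    return spikes

print(simulate_hh(i_inj=10.0, g_na=120.0))  # fires repetitively
```

Sweeping `g_na` and `i_inj` over a grid with a loop like this is essentially the kind of parameter exploration a one-compartment modeling study performs.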

Thursday, December 13, 2012

And now, let me answer your Seriously Deep Questions. All questions answered can be found in the LMAYQ index. And as always these are real true search terms that the all-knowing Internet directed to The Cellular Scale. Let's begin.

It is my personal opinion that thoughts do not actually look like anything. I've dissected many a brain and haven't ever seen one. However, let's suppose thoughts look like something, what would they look like?

One possibility is that the thought looks like what you are thinking about. A pretty ancient idea is that there are actually two of every object: one that is external (the actual object), and one that is internal, our representation of that object. Taken quite literally, this means that if you are looking at or thinking about a tree, your thought will look like a tree, but if you are thinking about a dog, your thought will look like a dog. This strikes me as unlikely.

So another way to look at it is: what does the brain look like when it is having a thought? In this case there is some support for the 'thought looks like what you are thinking' hypothesis, but it is very limited.

Above is a famous example of how a visual stimulus can be reflected in the brain in a very literal way. In this case a monkey looks at a grid and the activation pattern in the brain looks like a grid. But these days 'thoughts' usually look like this:

And there is no obvious or literal relationship between the shape of the fMRI image and the thought that is thunk.

2. "Why Neuroscience?"

Because neuroscience is our best chance at answering important questions like 'What do thoughts look like?' and 'How do we know what we know?'

3. "Do neurons tell you how to move or do they fire in response?"

Another excellent and deep question. The answer is (of course) that they do both.

People used to think of the brain as a black box, where sensory input comes in (like through your eyes) and gets 'processed' by the brain and a motor output comes out (like through your hands).

All of these steps, the sensory input, the motor output, and the processing in between take neurons.
But of course there is the Venus flytrap, which doesn't have 'neurons' per se, but does receive sensory input and generate motor output.

But the processing part of this process, the black box, is really complicated. There really is an unanswered question there about whether neurons are responding to something or telling something. When studies find that mirror neurons fire 'in response to' seeing actions performed, or that some amygdala neurons fire in response to pictures of animals, the question is always: why are these neurons firing? Are the neurons telling another part of the brain 'this is an animal'? Or are the neurons responding to that information?

The authors used two learning tasks to investigate how spines grow during learning. In the "reaching" task, mice had to reach their paw into a slit and grab a seed. In the "capellini handling" task, the mouse is given a 2.5 cm length of (I am not making this up) angel hair pasta and learns how to handle it for eating. Learning is measured by how fast the mouse eats the pasta.

The interesting difference between learning a specific task and just playing is that the spines grow in distinct clusters when the mice are taught a task. C shows the total spine growth, while D shows the proportion of clustered spines to total spines. "Reach only" means the mice were taught only the reaching task, and "cross-training" means they were taught both the reaching task and the pasta handling task.

The authors explain two possible functions for these spine clusters:

"Positioning multiple synapses between a pair of neurons in close proximity allows nonlinear summation of synaptic strength, and potentially increases the dynamic range of synaptic transmission well beyond what can be achieved by random positioning of the same number of synapses."

Meaning spines that are clustered and receive inputs from the same neuron have more power to influence the cell than spines further apart.

"Alternatively, clustered new spines may synapse with distinct (but presumably functionally related) presynaptic partners. In this case, they could potentially integrate inputs from different neurons nonlinearly and increase the circuit’s computational power."

Meaning that maybe the spines don't receive input from the same neuron, but are clustered so they can integrate signals across neurons more powerfully.

And of course...

"Distinguishing between these two possibilities would probably require circuit reconstruction by electron microscopy following in vivo imaging to reveal the identities of presynaptic partners of newly formed spines."

Thursday, November 29, 2012

A recent paper from France details the making of a 3D environment that can facilitate 'realistic' neural growth. Labour et al. (2012) created a collagen biomimetic matrix which contains nerve growth factor (NGF).

These scanning electron microscope images show the porous fibril texture of the collagen matrix. Most of the paper is spent explaining the methods for making this biomimetic matrix, but they also actually grow some pseudo-neurons (PC-12 cells) on the matrix.

They show that when cultured on top of this collagen surface, the cells extend neurites in three dimensions into the matrix and are affected by the NGF. (When there is no NGF, the neurites don't grow and the cells die.)

This paper is mostly about the methods, but I like the new possibilities that growing 3D cells opens up. With these biomimetic collagen matrices, the factors that cause specific dendritic arborizations in three dimensions can be analyzed. The environment can be completely controlled and the neurons easily visualized during growth. The authors suggest using these matrices to study neurodegeneration as well.

Another interesting thing this paper introduced me to is the 'graphical abstract.' I didn't know that was a thing, but it seems like a good idea. However, trying to summarize an entire paper in one figure seems pretty difficult. Here is their attempt:

Labour et al. (2012) graphical abstract

I think it does actually get the feel of the paper across pretty well, though it's not really informative without the actual abstract next to it.

Sunday, November 25, 2012

I am sure this question has plagued many Wheel of Time fans, but only now has an experiment been designed to test it. Just 4 days ago, Homola et al. (2012) published a paper in PLoS ONE in which they had people guess the ages of people in pictures while scanning their brains.

The first interesting thing that they found was that the older the person in the picture (either a real picture of a real person, or a hybrid 'morphed' picture like the ones above), the harder it was to tell how old they were. This isn't really that surprising, as the range of ages that can 'look' a certain age gets wider over the years.

Homola et al., (2012) Figure 2B.

Here they plot the standard deviation in years for people's guesses as to the age.

The authors showed videos of the faces morphing from one age to another to volunteers while they were in the fMRI machine.

As a side note: they found no difference between male and female volunteers. If they had, I think a big deal would have been made about it. But since they didn't, it's just a tiny sentence in a long paper.

Ok, back to the processing of age. They threw out the results from people who were really, really bad at rating age because they 'weren't motivated' and apparently weren't really trying. (This could be a bit of cherry-picking or data massaging.) Then they compared the areas of the brain that were active in people who were really good at guessing age with those active in people who were only average.

Homola et al., (2012) Figure 4D

The basic finding was that the posterior angular gyrus area (pANG) in the left hemisphere was 5 times more active for the expert age guessers than it was for the average ones. Conclusion: pANG is important for age-processing. This on its own is good to know, but not amazingly interesting. What I think is cool is the idea that the authors present as a follow-up experiment in their discussion:

"Even though our study highlights pANG as one key component for age processing, its precise role in this context is still speculative and needs further investigation. Our model, illustrated in Figure 7, gives rise to interesting hypotheses: One testable prediction would be that disruption of left pANG activity using transcranial magnetic stimulation (TMS), for example, should impair numerical age but not gender judgements, and that brain lesion-symptom mapping can eventually dissociate the two." Homola et al., (2012)

So now we know, the Aes Sedai must have some magic that transcranially impairs pANG in everyone around them so they can't guess their age. That is how to stay truly ageless.

Tuesday, November 20, 2012

Latif and Bozkurt from North Carolina State University recently presented a paper (though I can't find a peer-reviewed publication on Pubmed) explaining their Biobot. They use the Madagascar hissing cockroach...

... and attach an electrically stimulating 'backpack' (see first picture). They then stimulate the antennae in a variety of ways to 'steer' the Biobot.

"In these studies, electrical pulses were applied to the insect to create biomechanical or sensory perturbations in the locomotory control system to steer it in desired directions, similar to steering a horse with bridle and reins." -Latif and Bozkurt

This is very similar to the Backyard Brains RoboRoach, but the system created by Latif and Bozkurt is extremely precise. Rather than just making the Biobot turn when stimulated, Latif and Bozkurt can make the cockroach walk a specified line.

Pretty cool. The authors note that generally the cockroaches want to walk straight until they encounter an obstacle (or stimulation). So, sure, this is sort of like steering a horse with reins, but the horse has to be trained to know what the bridle signals mean. This setup is more like creating a virtual reality for the cockroach, where it thinks that it has 'run into' something at certain points on the line. This is similar to creating a virtual reality for worms by stimulating specific neurons with light.

Of course, the practical applications of this are a little iffy. People always seem to say that these little insect-bots could be of use in disaster settings where ground-level surveillance of a rubble-littered area is needed, but I think the scientific applications are what is really exciting. Being able to create a virtual reality of any shape or size could allow for tests of spatial navigation in the cockroach. You could even try to train the cockroach to find something or avoid something and then 'confuse it' by changing the virtual environment suddenly. Could it adapt?

Friday, November 16, 2012

Steps 1 and 2 of neuron-building, as well as an important set of shortcuts can be found in the How to Build a Neuron index. Step 3 is deciding which simulation software or programming language you want to use.

The big two are Genesis and Neuron. They are pretty similar in a lot of ways, but Genesis is built for Unix-like systems such as Linux, while Neuron runs natively on Windows (as well as Mac and Linux). You can, however, run Genesis on Windows by installing Cygwin, a Linux-like environment.

Both programs can read in morphological data, but they use different syntax and coding procedures. There are other types of neural simulators as well, and an ongoing problem in the field of computational neuroscience is compatibility between programs. If someone has done the work to make a beautiful Purkinje cell in Genesis like the one above, it will take a lot of time and effort to translate that neuron into a different simulator such as Neuron.
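Morphological data like this is commonly distributed in the simulator-neutral SWC format, where each line gives a point's id, type, 3D coordinates, radius, and parent point. As a sketch of what 'reading in morphological data' involves, here is a tiny SWC parser in Python; the three-point morphology is a made-up toy, not a real reconstruction:

```python
# Each SWC line: id, type, x, y, z, radius, parent_id (-1 = root)
SAMPLE_SWC = """\
# toy morphology: a soma with one short dendrite (hypothetical values)
1 1 0.0 0.0 0.0 10.0 -1
2 3 0.0 20.0 0.0 1.5 1
3 3 0.0 40.0 0.0 1.0 2
"""

def parse_swc(text):
    """Return a dict mapping point id -> (type, x, y, z, radius, parent)."""
    points = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip comments and blanks
            continue
        pid, ptype, x, y, z, r, parent = line.split()
        points[int(pid)] = (int(ptype), float(x), float(y),
                            float(z), float(r), int(parent))
    return points

morph = parse_swc(SAMPLE_SWC)
print(len(morph), "points; soma radius =", morph[1][4])
```

Both Genesis and Neuron ship their own importers for files like this; the incompatibility the next paragraph describes is in the channel and simulation code built on top of the morphology, not the morphology itself.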

Gleeson et al., (2010) explain this problem and present a possible solution in the form of the "Neuron Open Markup Language" or NeuroML.

"Computer modeling is becoming an increasingly valuable tool in the study of the complex interactions underlying the behavior of the brain. Software applications have been developed which make it easier to create models of neural networks as well as detailed models which replicate the electrical activity of individual neurons. The code formats used by each of these applications are generally incompatible however, making it difficult to exchange models and ideas between researchers... Creating a common, accessible model description format will expose more of the model details to the wider neuroscience community, thus increasing their quality and reliability, as for other Open Source software. NeuroML will also allow a greater “ecosystem” of tools to be developed for building, simulating and analyzing these complex neuronal systems." -Gleeson et al (2010) Author Summary

NeuroML is basically a "simulator-independent" neuronal description language. A neuron built with or converted to NeuroML should be able to run on Neuron, Genesis, and plenty of other platforms. Gleeson et al. validated NeuroML by using a simulated pyramidal neuron converted to NeuroML format and run with several different simulators.

Gleeson et al., (2010) Figure 7

Zooming in:

Neuron, Genesis, Moose, Psics comparison

All the simulators overlay so tightly that you can barely tell that they are separate lines.

So when building your neuron, take care to follow the NeuroML format; then you and others can use it with any simulator you want.

Sunday, November 11, 2012

Action potentials are the main means of communication between neurons, and their exact timing can be really important. Timing is especially important in the auditory system, because the auditory system encodes (among other things) information about sound wave frequency.

I've previously written about auditory processing with regards to the wonder that is the chicken brain, but today we will focus on timing-specificity in the mammalian brainstem. Specifically, some weird channels in the Medial Nucleus of the Trapezoid Body (the MNTB).

A paper from the Kaczmarek lab at Yale explains that these sodium-activated potassium channels (SLICK and SLACK) are present in the mouse auditory brainstem and contribute to the 'temporal accuracy' of the MNTB neurons. Yang et al. (2007) record the action potentials from these neurons at a range of frequencies and show that the neuron can 'keep up' with the frequencies better when more sodium is present.

In the figure above, the 'flatter' the line, the better the 'temporal accuracy.' They also made a computational model of this neuron and ran simulations altering the sodium values and reversal potential.
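To illustrate what 'temporal accuracy' means here (with my own toy numbers, not Yang et al.'s data), one common way to quantify it is the trial-to-trial jitter, the standard deviation of spike latency relative to each stimulus:

```python
import statistics

# Hypothetical spike latencies (ms after each stimulus) across repeated trials
accurate_neuron = [1.02, 0.98, 1.01, 0.99, 1.00]  # low jitter: reliable timing
sloppy_neuron = [0.60, 1.40, 0.85, 1.30, 0.95]    # high jitter: sloppy timing

def jitter(latencies):
    """Temporal accuracy metric: SD of spike latency across trials (ms)."""
    return statistics.stdev(latencies)

print(f"accurate neuron jitter: {jitter(accurate_neuron):.3f} ms")
print(f"sloppy neuron jitter:   {jitter(sloppy_neuron):.3f} ms")
```

A neuron whose jitter stays low even as the stimulation frequency climbs is one that 'keeps up', which is roughly what the flat lines in the figure convey.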

Yang et al., 2007 Figure 9D

Their model simulations are similar to their experimental recordings, in that more sodium results in more temporal accuracy of the action potentials. They confirmed that this was due to a sodium-activated potassium channel by directly activating SLACK and seeing a similar improvement in temporal accuracy.

The SLACK channel still blows my mind, but its role in helping the auditory system fire with the utmost precision actually makes a lot of sense.

Thursday, November 8, 2012

I can't think of any situation where you might be inclined to soap up your brain (except maybe if you had recently been trepanned), but it is still a bad idea.

you can actually buy soap shaped like a brain here. (It smells like bubble gum!)

When used on, say, an oily frying pan, soap plus scrubbing will trap the oil in little units (micelles) which can be rinsed off. Without soap, using only water, the oil, which is hydrophobic (meaning it would rather stick to anything besides water), will stick to the pan rather than the water.

How does this relate to the brain? Well, the cell membrane, which helps give neurons their shape, is made up of a lipid bilayer. These lipids have a hydrophobic tail (which hides in the middle of the layer) and a hydrophilic head which faces outward, just like the oil particles above.

So basically if you scrubbed your brain cells with soap, the membrane that holds the neuron together would be disrupted. Scientists actually use this principle to get stuff (like DNA) out of a neuron. In DNA extraction, there is a lysis step in which a detergent (like SDS) is applied to the tissue and given a good shake. This disrupts the membrane and allows access to the contents of the neuron.

You can wash your skin with soap because the living skin cells are protected by an outer layer of dead skin cells. Though if you soap up too much, you can actually dry out your skin by stripping it of lipids faster than they can be replenished. See "How much should you shower" for an excuse to stay in bed tomorrow morning rather than get up and shower.

Sunday, November 4, 2012

A lot of fuss has been made recently about the street drug "Special K" (ketamine). It's basically an anesthetic used in labs and veterinary offices to tranquilize mice, rats, cats, and (famously) horses, but recently it's been lauded as a newer, faster anti-depressant.

The possibility that it might have near immediate anti-depressant effects on humans has been around for a little while, but the concept is picking up steam as new research finds mechanisms for how it might actually work in depressed patients. (I briefly mention one new study in an SfN neuroblogging post. )

An emerging theory is that depression is not so much a chemical imbalance as it is a loss of neurons. Thus the cure for depression is not restoring the balance of serotonin or dopamine, but restoring the growth of new neurons. Some suggest that this is how classic anti-depressants (like Zoloft) work, by fixing the neuron atrophy problem. This could also explain why these anti-depressants take so long to work, though I have expressed skepticism about this hypothesis.

So the question is: does ketamine cause the growth of new neurons, help in their maturation, or prevent neuronal atrophy? Ketamine is an NMDA receptor antagonist, so it inhibits synaptic transmission. It doesn't inhibit all synaptic transmission the way deadly poisons (tetrodotoxin, for example) do, but enough of it to change something in the brain. Even knowing something about NMDA receptors, I found it hard to conceive of a connection between blocking them and neuronal growth.

A nice review by Duman and Li (2012) spells it out for me, explaining new research that links ketamine with the growth of new synapses.

Duman and Li 2012 figure 3

The idea is that ketamine blocks the NMDA receptors on the GABAergic (inhibitory) neurons, so there is less inhibition and more glutamate. When there is more glutamate, there is more BDNF (brain-derived neurotrophic factor). BDNF helps synapses grow by triggering a cascade of events (via mTOR) which causes more AMPA receptors to be inserted into the synapse, making the synapse stronger, more stable, and more mature.

The authors cite their previous Li et al., 2010 Science paper explaining that when they block mTOR with the drug rapamycin, the effects of ketamine on new spine growth disappear and its anti-depressant effects disappear. However, this is a study in rats and assessing the depressed state of a rat is as tricky as assessing a rat's post-traumatic stress. So the claim here isn't so much that ketamine causes neurogenesis, but that it could help new neurons become synaptically mature, and thus functionally useful. (Carter et al. is investigating this further)

As shiny and interesting as this is, I am not quite sold on it. I don't see how the NMDA antagonist is going to inhibit the inhibitory neurons more than the excitatory neurons, and I would love to see research showing how ketamine causes glutamate accumulation.

And as far as actually using it as a treatment for depression goes, there are some serious side-effects. Ketamine is a hallucinogenic street drug which can cause a schizophrenia-like state. Therefore, it seems unlikely that ketamine itself will ever be prescribed as an anti-depressant, but new research could reveal (or synthesize) other molecules that activate mTOR directly or somehow bypass the hallucinogenic aspect of ketamine.

Friday, November 2, 2012

Time to get back to Answering Your Questions. It has been a whole month, and I have had some great search engine queries lead to The Cellular Scale. As always, I am pretty sure whoever asked The Internet these questions did not find an answer on this blog. Since I hate to disappoint, here are some real answers to some real questions:

1. "How to make a football out of construction paper"

I did write about how bipolar neurons look like footballs, but never explained how to make a football yourself. You take a piece of construction paper or notebook paper and fold as diagrammed in the following image.

Well, there you go, now you can draw the vacant-staring psychic platypus-duck from Pokemon. Here's a little more about Psyduck:

Psyduck is constantly stunned by its headache. It usually stands immobile, with a vacant expression, trying to calm its headache. However, when its headache becomes too severe, it releases tension in the form of strong psychic powers. (from Bulbapedia)

For the record, I do not endorse the scientific validity of the Bulbapedia website.

4. "How to kill a small man."

Hmmm. This is a tough one. How small is this man? If he is very very small, you could put him in a tupperware container without poking air holes. If he is a little bigger, you could probably stuff him in the refrigerator. That's fatal, right?

I suppose I am glad you did not find an answer to this particular question at The Cellular Scale.