Wednesday, April 25, 2012

We've discussed recent findings about erasing fears from memories, but today we'll be talking about erasing the fear memory itself. This involves actually inhibiting or killing the individual neurons that encode a particular memory, so for obvious reasons these experiments are done on mice rather than humans.

Mice can be trained to associate a mild electrical foot shock with a tone. The tone plays and then a foot shock is given. Once the mouse has learned this association, it will freeze in place when the tone is played. This is called an auditory fear memory.
Using this fear memory paradigm, Sheena Josselyn's lab in Toronto discovered how to visualize the neurons that are active during fear memory formation. They also developed a way to target and delete those neurons, consequently deleting the memory.

In Han et al. (2009), some beautiful genetic trickery was used to express a 'kill switch' only in the neurons that are active during memory formation. This kill switch is the diphtheria toxin receptor. Normally cells do not have this receptor, but when it is artificially expressed on the cell surface, an injection of diphtheria toxin will kill that cell, but not neighboring (DTR-free) cells. The really impressive genetics is in expressing the diphtheria toxin receptor only in neurons active during memory formation. To do this, the Josselyn lab used a marker for cell activity in amygdala neurons during memory formation, CREB. Specifically, they used a transgenic mouse that expressed the diphtheria toxin receptor only when CREB activated cre.

So now, with the memory encoded and the kill switch in place, they pull the trigger and inject diphtheria toxin into the mice. This kills all the amygdala cells that were active during memory formation (about 250 amygdala cells; Han et al., 2009, Figure 1B). They then test the mice again for freezing behavior.
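The logic of the experiment is simple enough to sketch as a toy simulation. To be clear, everything below (cell counts, probabilities, function names) is my own invention for illustration; it just captures the 'tag the active neurons, then kill only the tagged ones' idea:

```python
import random

random.seed(0)

# Toy model: a population of amygdala neurons, a random ~10% of which
# were active during memory formation and so express the diphtheria
# toxin receptor (DTR) via the CREB-cre construct.
N_NEURONS = 2500
FRACTION_ACTIVE = 0.1

neurons = [{"dtr": random.random() < FRACTION_ACTIVE} for _ in range(N_NEURONS)]

def encoding_cells(cells):
    """Cells assumed to carry the memory: the DTR-tagged (active) ones."""
    return sum(c["dtr"] for c in cells)

def inject_dt(cells):
    """Diphtheria toxin kills only DTR-expressing cells, sparing neighbors."""
    return [c for c in cells if not c["dtr"]]

before = encoding_cells(neurons)            # roughly 250 tagged cells
after = encoding_cells(inject_dt(neurons))  # 0: the whole tagged trace is gone
print(before, after)
```

The control groups fall out of the same logic: tag an activity-independent set of cells instead, and the memory-encoding population survives the toxin.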

Han et al., 2009 Figure 3

The second set of columns (CREB-cre, DT) is the experiment I have described. Before any drug is injected the mice freeze in response to the tone, but after the diphtheria toxin (DT) injection, the mice freeze much less in response to the tone. What is really essential to this study is the control experiments that they ran.

They wanted to make sure that killing just any 250 neurons in the amygdala didn't cause memory loss. So instead of using the CREB promoter to activate cre (and thus the diphtheria toxin receptor), they used a control promoter (cntrl-cre, DT above) to drive cre in about the same number of neurons, but independent of neural activity. In this case, there is no statistical difference in how much the mouse freezes in response to the tone. (Compare the first two columns to each other.)

Similarly, they wanted to make sure that the diphtheria toxin (DT) itself didn't erase the memories. They used a CREB construct that did not drive cre, and thus did not cause any diphtheria toxin receptors to be expressed (CREB, DT). In this case, there was again no difference between pre and post DT injection. Finally, they wanted to make sure it wasn't the CREB-cre construct itself, so they added the CREB-cre like normal, but did not inject the diphtheria toxin, so the receptors were expressed on these cells but were never activated. In this case again, no difference in the amount of freezing.

Because none of these control groups showed a difference in freezing, Josselyn could be confident that she had really shown that the specific neurons that encoded the memory were necessary for recalling the memory.

The authors also make clear that the amygdala is not seriously damaged in this study, as the mice can re-learn the task after the specific neurons have been deleted.

One particularly interesting aspect of this study, which the authors do not discuss, is the number of neurons necessary for encoding a memory. They delete hundreds of neurons. I wonder if deleting half of them or even a quarter would result in the same erasure of the memory. How many neurons does it take to encode a memory?

Recently this concept of targeting proteins to only the active cells has been extended to include channelrhodopsin, the protein which allows cells to be activated by light. Liu et al. (2012) were able to reactivate the neurons that were specifically active during the learning of a fear response. Stimulating these neurons caused the mouse to freeze, suggesting that stimulating these neurons reactivates the memory. This paper is covered thoroughly by Mo Costandi at Neurophilosophy.

Sunday, April 22, 2012

A few readers were kind enough to take the online typing tests that I linked to and report their results. Unfortunately there are too few Dvorak users out there, so no new results from them. However, the Qwerty users had some seriously fast fingers, so I had to change the scale of the graph!

This piqued my curiosity: I wanted to know how fast the FASTEST typists could type, and I also wanted to see them in action. So for your viewing pleasure, here is Sean Wrona winning the 2010 SXSW typing championship at 163 wpm. (I gather from the internet that he can actually type as fast as 237 wpm.)

He types in Qwerty, and since he could apparently type 80 wpm at age 6, I imagine the keyboard format is pretty ingrained in his brain.

Which brings me to another point: I would like to look at this guy's brain.

But what exactly would I be looking for?

Consistently 'exercising' a part of the brain can result in visible structural changes there. A classic example of this is the taxi drivers who show navigation-based changes in hippocampal structure (Maguire et al., 2000).

The hippocampus is cool and all, but I wouldn't expect to see typing-dependent changes there. It traditionally has much more to do with episodic memories (yes, Proust), and spatial navigation (yes, Place Cells).

striatum is the striped area

The brain structure that I might expect to be affected by extreme typing expertise is called the striatum (a part of the basal ganglia). While it receives less attention than the hippocampus and amygdala, the striatum is a fascinating structure crucial to habit formation, addiction, and motor learning. Playing the piano, kicking a soccer ball, typing, and almost anything that people refer to as 'muscle memory' is a motor sequence learned with the help of the striatum.

A recent study from Korea compared the size of the striatum in basketball players (6 hours of practice a day) to that in non-athletes matched for height and weight. Park et al. (2010) found that both the absolute and relative sizes of the striatum (in both hemispheres) were larger in the basketball group than in the non-athlete group. They give a few reasons why this might be so (more cells, more blood flow to the region, etc.), but nothing conclusive.

While this is an interesting study, it is very limited. The question remains: What aspect of basketball (if any) is causing this structural difference? And there are many possibilities:

1. People who have bigger striatums to begin with are more likely to play basketball, and the 'structural change' is not due to the basketball playing at all.
2. Exercise itself causes striatal enlargement.
3. The teamwork and social interaction cause striatal enlargement.
4. Hand-eye coordination causes it.
5. Learning the game of basketball causes it.

and so on...

I would like to see a more thorough experiment, using something as simple as this foursquare design:

If both the basketball players and the runners have larger striatums, exercise would be implicated. If both the basketball players and the piano players have larger striatums, then the skill learning would be implicated. If all three groups have larger striatums than the couch potatoes, that could be a sign that being a couch potato is pretty bad for your brain.
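The inference behind that foursquare design can even be written out mechanically. This is just my own sketch (the group names and the two-factor framing are my invention, not anything from the paper):

```python
# Hypothetical 2x2 design: exercise (yes/no) x skill learning (yes/no).
groups = {
    "basketball":   {"exercise": True,  "skill": True},
    "runners":      {"exercise": True,  "skill": False},
    "piano":        {"exercise": False, "skill": True},
    "couch_potato": {"exercise": False, "skill": False},
}

def implicated_factors(enlarged):
    """Given the set of groups with enlarged striatums, return the factors
    shared by every enlarged group but absent from every normal group."""
    factors = []
    for factor in ("exercise", "skill"):
        if (all(groups[g][factor] for g in enlarged)
                and not any(groups[g][factor]
                            for g in groups if g not in enlarged)):
            factors.append(factor)
    return factors

print(implicated_factors({"basketball", "runners"}))  # ['exercise']
print(implicated_factors({"basketball", "piano"}))    # ['skill']
```

And if all three active groups come back enlarged relative to the couch potatoes, neither factor alone explains it, which is the 'being a couch potato is bad for your brain' reading.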

So getting back to the expert typist scenario, which box does the typist fit in? It is clearly not exercise, but it is not exactly equivalent to piano playing either. The piano player learns new songs on a regular basis, while the typist doesn't learn new paragraphs or new sentences in the same way. If it is the skill learning that 'grows' the striatum, then typing practice might not do anything past a certain point. It might take learning a new keyboard style to stimulate 'skill-learning' based growth.

In conclusion, the striatum (and possibly the cerebellum) might be an interesting place to look for brain changes in typing experts. However, the particular skill of typing fast is not necessarily the most likely skill to cause changes in the striatum.

Tuesday, April 17, 2012

The Dvorak keyboard is an alternative to the traditional Qwerty layout. Proponents (like me) claim that it is faster and easier to use. Dvorak himself claimed in a 1943 National Business Education Quarterly paper, "There is a better typewriter keyboard," that experts could type 35% faster in the Dvorak layout than in the Qwerty layout. (Value cited in this paper; I could not locate the original.)

I started using Dvorak during my freshman year of college because some guy told me it was cool. I converted my computer's keyboard format to Dvorak and re-arranged all the keys of my 1st generation iMac.

I feel old.

I was not much of a 'typer' before attempting Dvorak. I was a step above 'hunt and peck' (I used multiple fingers), but I couldn't type without looking at the keyboard. It wasn't long before I became much faster typing in Dvorak than in Qwerty, and could touch-type for the first time in my life.

I now change all computers I use to Dvorak, but do not change the physical keys on the keyboard. This has resulted in some lovely events such as my work-study boss in college thinking her computer was 'haunted' because I forgot to change the format back before leaving the office. It has also resulted in some embarrassing moments for me when I am forced to return to a Qwerty layout. During a presentation on some new neuro-software, I volunteered to test it out. This was a bad idea, because of course the presenter's computer was set to Qwerty. I not only typed super-slowly, but I couldn't put in a familiar password at one point. I knew the password by touch, and without the letters showing up as feedback, I literally could not type it correctly.

Despite the occasional problem, I love typing in Dvorak. I find it much easier and more natural than typing in Qwerty. However, since I have been typing in Dvorak since iMacs were cool, my favoritism is probably due to familiarity more than some inherent 'betterness'. I can hardly be objective here.

For some real objective analysis we need some peer-reviewed studies. Luckily the Human Factors and Ergonomics Society cares about this sort of thing.

In a 2009 paper Anderson et al. investigated just how steep the learning curve was for a variety of alternative keyboards.

In this study, participants typed a familiar passage (having practiced it 10 times with the normal Qwerty keyboard) 5 times on an 'alternative' keyboard. The researchers then plotted the time it took to type the passage.

Anderson et al., 2009 Figure 3

The split keyboards are Qwerty layout keyboards, just angled differently for ergonomic purposes, so it is not too surprising that they resulted in fast typing times. The Dvorak and chord keyboards were more difficult for the participants, but both showed strong learning curves.

This study says nothing about how 'experts' type on any of these keyboards, so I decided to test myself.

Online, you can test your typing speed by typing in random words or passages for 1 minute.
I tried these tests 3 times each in Dvorak and Qwerty (alternating). Not surprisingly, I was much better in Dvorak.

open symbols= random words test, filled symbols=passages test

The random words test is much easier than the passages test, which includes punctuation, but in both tests I was faster in Dvorak.

But of course I don't type in Qwerty regularly, so this isn't exactly the right comparison. To rectify this, I got help from a Qwerty user who was kind enough to try the passages test 3 times for me. My Dvorak passages tests were slightly better than the Qwerty user's passages tests (filled red circles compared to blue squares). One person per group is hardly proof and couldn't even count as preliminary data, so don't quote this figure as proof that Dvorak is faster or anything. It could just as easily be proof that people with brown eyes (me) are better typers than people with blue eyes (the Qwerty user). This was just some good old-fashioned dorky fun-with-data.

If you want to add data points to my table, go ahead and take the typing tests yourself:

Saturday, April 14, 2012

Place cells are neurons in the hippocampus that fire when an animal is in a particular location. Like many other cases where a neuron activates in response to something specific, the question everyone wants to answer is 'why does the neuron fire at that particular spot?' A study published 1 year ago today used a quite difficult technique and a combination of patience and extreme persistence to look more deeply into the intracellular properties of individual place cells.

Previously people have studied place cells using a technique called 'extracellular recording.' This technique involves implanting a recording electrode into the hippocampus of a rat, mouse, or bat (sometimes a human, if the electrode is being implanted for health reasons). This recording electrode can tell when a neuron close to it spikes (i.e. fires an action potential), and the time of the spike can be matched to a video recording of the animal moving around in space. The above image represents a top-down view of a square box where the rat was allowed to run around freely. The black line is where the rat moved during the recording and the red dots indicate where the rat was each time a specific neuron fired. You can see that this particular neuron fired only when the rat was in a certain area.
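For the curious, turning those red dots into a firing rate map is conceptually simple: bin the arena, count spikes and time spent in each bin, and divide. Here is a minimal sketch with synthetic data (the trajectory, the 'cell,' and every number are invented for illustration):

```python
import random

random.seed(1)

ARENA = 100          # arena is ARENA x ARENA cm
BIN = 20             # spatial bin size in cm
N_BINS = ARENA // BIN

# Synthetic trajectory: one (x, y) position sample per video frame.
positions = [(random.uniform(0, ARENA), random.uniform(0, ARENA))
             for _ in range(5000)]

# Synthetic place cell: fires mostly when the rat is near (25, 25).
def fired(x, y):
    return (x - 25) ** 2 + (y - 25) ** 2 < 15 ** 2 and random.random() < 0.5

occupancy = [[0] * N_BINS for _ in range(N_BINS)]
spikes = [[0] * N_BINS for _ in range(N_BINS)]
for x, y in positions:
    i = min(int(x // BIN), N_BINS - 1)
    j = min(int(y // BIN), N_BINS - 1)
    occupancy[i][j] += 1
    if fired(x, y):
        spikes[i][j] += 1

# Firing rate map: spikes per frame spent in each bin
# (bins with zero occupancy are left at zero to avoid dividing by zero).
rate = [[s / o if o else 0.0 for s, o in zip(srow, orow)]
        for srow, orow in zip(spikes, occupancy)]

print("peak rate:", max(max(row) for row in rate))
```

The bin containing the synthetic field lights up while bins far from it stay at zero, which is exactly the structure the red dots in the figure convey.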

Extracellular recording has been used extensively to investigate how place cells develop, adapt to new environments, and even how they are remembered. However, this technique can only show when a neuron spikes. It can't reveal any information about intracellular characteristics.

Epsztein et al. (2011) use a new technique to investigate what is happening inside a place cell: whole-cell patch clamp. In whole-cell patch clamp, a glass microelectrode, which is filled with a salt solution similar to that found inside actual neurons, is lowered so that it is right next to the surface of the cell (the opening of the microelectrode is smaller than the cell body). The cell membrane forms a seal around the tip of the microelectrode, and then brief suction is applied to break a hole into the cell. Once the hole is made, the electrical signal of the neuron can be measured through the microelectrode.

This is a difficult technique because any slight movement of either the cell or the glass microelectrode could break the seal and sever the connection. This technique is commonly used in slices of brain or in cultured brain cells, and is done on a vibration isolation table to prevent jostling of the cell and microelectrode. I am very familiar with this technique and its difficulties, so I am beyond impressed that Epsztein et al. were able to use this technique in a moving rat!

Epsztein et al., 2011 Fig 5

While the use of this technique in freely moving rats is difficult, the findings are certainly interesting enough to justify the effort.

The authors found that before the rat was put in the maze, the cells that turned out to be place cells were physiologically different from the cells that turned out not to be place cells (so-called silent cells). Specifically, the future place cells spiked in a more 'bursty' pattern (see image), while the future silent cells spiked in a more 'regular' pattern.
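'Bursty' can be quantified in several ways; one simple metric is the fraction of inter-spike intervals shorter than some burst threshold. The spike trains and the 10 ms threshold below are my own invented examples, not the authors' analysis:

```python
def burst_fraction(spike_times_ms, threshold_ms=10.0):
    """Fraction of inter-spike intervals below the burst threshold."""
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    if not isis:
        return 0.0
    return sum(isi < threshold_ms for isi in isis) / len(isis)

# Synthetic 'bursty' train: clusters of spikes a few ms apart.
bursty = [0, 4, 8, 200, 204, 208, 400, 404, 408]
# Synthetic 'regular' train: evenly spaced spikes.
regular = [0, 50, 100, 150, 200, 250, 300, 350, 400]

print(burst_fraction(bursty), burst_fraction(regular))  # 0.75 0.0
```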

Previous theories about how place cells were generated mostly focused on what inputs the cells were receiving, not their intrinsic properties. What makes this finding so fascinating is that the intrinsic cellular properties which govern the spiking pattern of a cell actually predict whether it will be a place cell or not. The inputs onto these cells may be important for organizing which cells fire at each particular place, but a cell must have certain intrinsic qualities to become a place cell in the first place. In the authors' words:

"Therefore what intrinsic factors may predetermine is the restricted subset of cells that could potentially have place fields. Moreover, among the set of possible place cells, the relative locations of their place fields also appear to be predetermined."

One big issue that the authors bring up in their discussion is that of 're-mapping.' Place cells are specific to the environment that the rat is in. When the rat is moved to a new environment, it forms new place fields with new cells (though some overlap). The important thing is that sometimes cells will be silent in one environment and have place fields in a different environment. It's really not clear whether these cells can modulate their intrinsic properties fast enough to 'become' place cells from silent cells, or whether there are some cells that are never going to be place cells no matter what environment they are put in. Because this technique is so difficult, these questions are not likely to be clarified very soon. But, at least now we know that we should be asking them.

Wednesday, April 11, 2012

As I have recently explained, The Cellular Scale wants to weigh the worth of certain claims made by the media, individuals, and scientists.
To start with, we will investigate the claim that hot flashes can cure cancer. I just heard someone say this the other day, so there is no media or peer-reviewed source to condemn. However, hearing it from a non-scientist in a completely non-scientific context leads me to believe it might be something that is popularly accepted, and therefore merits a good close look.

Here is the exact quote I heard:

"The temperature reached during hot flashes in menopause is exactly the temperature at which cancer cells cannot survive."

There are some obvious problems with this specific claim. If it were true as stated, then no one would have cancer cells remaining in their body after undergoing a hot flash, and heating up the body would be the undisputed cure for cancer.

"A new study shows that having symptoms such as hot flashes during menopause appears to be tied to a lower risk of the most common kinds of breast cancer."

This claim is based on an actual paper. The paper suggests that the connection is due to differing estrogen levels in women with and without symptoms:

"Prior studies indicate that women with menopausal symptoms have lower estrogen levels because they go through menopause as compared with women who do not experience them. Given the central role of hormones in the etiology of breast cancer, a link between menopausal symptoms and breast cancer is plausible. However, no prior studies have evaluated the association between menopausal symptoms and breast cancer risk....This is the first study to report that women who ever experienced menopausal symptoms have a substantially reduced risk of breast cancer, and that severity of hot flushes is also inversely associated with risk." (from the abstract, Huang et al., 2011)

The finding that the more severe the hot flashes, the lower the risk of breast cancer is exciting and useful, but the paper makes no claim that this is because of the heat. It certainly doesn't make any claims about other forms of cancer, or the viability of already-present cancer cells during a hot flash.
Finally, this is a classic example of correlation vs. causality. The finding that the women with severe hot flashes have a lower risk of breast cancer does not mean that the hot flashes prevent breast cancer. (Hot flashes might cause the reduction in risk, but the research hasn't shown that yet) In fact, it seems equally, if not more likely that one mechanism causes both severe hot flashes and a reduced risk of breast cancer.
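This confounding scenario is easy to simulate. In the toy model below, a hidden factor (say, low estrogen) raises the chance of hot flashes and lowers the chance of cancer, with no direct link between the two, and a correlation appears anyway (all of the probabilities are made up):

```python
import random

random.seed(2)

# Each simulated woman: a hidden common cause drives both observables.
records = []
for _ in range(10000):
    low_estrogen = random.random() < 0.5  # hidden confounder
    # Hot flashes and cancer each depend only on estrogen, never on each other.
    hot_flashes = random.random() < (0.8 if low_estrogen else 0.2)
    cancer = random.random() < (0.05 if low_estrogen else 0.15)
    records.append((hot_flashes, cancer))

def cancer_rate(with_flashes):
    outcomes = [c for f, c in records if f == with_flashes]
    return sum(outcomes) / len(outcomes)

print(cancer_rate(True), cancer_rate(False))
```

The hot-flash group shows a clearly lower cancer rate even though, by construction, hot flashes cause nothing at all.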

Friday, April 6, 2012

I am not going to lie, I recently got caught up in Hunger Games fever, tearing through all three books at a breakneck pace and staying up way too late doing so. While these books raise interesting questions on some of my favorite topics (like 'how much is too much to sacrifice for victory?'), one particular neuroethics issue jumped out and stung me.

Without divulging any plot points or spoilers, I will explain:

In the last book, Mockingjay, a good guy is taken hostage by the bad guys. Although you never see any actual scenes, it is clear that this person is being tortured for information. One particular form of torment used on this character is called Hijacking.

Injected with the hallucinogenic venom of the mutated wasp (the tracker jacker), this person is forced to recall memories and watch videos of people s/he loves. This disoriented and unquestionably negative emotional state then alters this person's memories such that when s/he finally sees the familiar faces, s/he distrusts them, hates them, and wants to kill them.

This portrayal of neurotorture (yes, you can put neuro in front of any word) brings up several questions:

1. Could this really work?
2. Has anyone ever tried it?
3. Is it wrong?

Let's take a deeper look:

1. Could this really work?
There is no such thing as a Tracker Jacker, but in principle, could a mood or perception altering drug be used on a person to change their memories?

A drug that depleted a person of dopamine or serotonin, or in contrast flooded them with dynorphin, could depress someone's mood and possibly make them paranoid or distrustful. Could re-opening a memory during a suspicious, paranoid mood cause someone to re-encode that memory with doubt, distrust, misery, or hate? Or could the addition of a powerful hallucinogen result in the person not being able to tell which memories were real and which were not? Kindt et al. (2009) showed the opposite was true: application of a beta-blocker (propranolol) during the recall of a fearful memory could dampen the fear response associated with that memory, while the drug alone (without the re-opening of the memory) had no effect.

So my answer: yes, to some extent. If you can open a memory and extinguish the fear, why couldn't you open a memory and instill the fear?

Could this method sow doubt and confusion in a prisoner's mind? Yes.
But could it make someone ready to kill their old allies? Not too likely. I think it would take some seriously extensive and targeted hijacking to even come close to something like that.

In my opinion, the most likely outcome to any hijacking attempt with current known neurological targets would be to drive the prisoner into despair and madness. I doubt you could 'reprogram' a person to kill a specific target.

2. Has anyone tried this?

This is a pretty tough question. If a government has tried this, it is likely a secret, and all the sources I can find online explaining how governments weaponize LSD or whatnot appear about as reliable as The Men Who Stare at Goats. (So I am not adding links to them here; Google it if you want some serious theorizing.)

Answer: I really don't know, but I want to know.

3. Is it wrong?

In one sense, the answer seems an obvious yes, so I will re-phrase this question into a slightly more complex one: Is neurotorture worse than physical torture?
Is it a greater violation of human rights to take away a person's identity, loyalty, and ability to make rational decisions than to hurt their physical body?

In a sense it seems much worse. It was certainly much more heart wrenching to read about hijacking and its repercussions than to read about physical torture. But why?

It could be argued that the whole point of physical torture is to break a person's mind and take away their ability to make rational decisions. And if you have a physically non-painful neurochemical shortcut to do so, why shouldn't you use it? Maybe it would save everyone's time, get that critical information soon enough to stop the terrorist attack, and even protect the prisoner's body from pain.

So why does it seem so distasteful? Is it important to give the person a chance to resist physical torture? Is that more fair?

My answer to 'Is neurotorture wrong?': Yes, but not more wrong than physical torture.

Readers, I am sure you have opinions, and I am curious to hear them. Please express your opinion here or in the comments section.


To make things even more complex, what if instead of neurotorture the opposite tactic was used? What if a prisoner was given extensive, repeated doses of oxytocin to try to hijack their trust? Is it OK to purposefully induce a form of Stockholm syndrome in your prisoners? This would be a physically and psychologically non-painful way to get a prisoner on your side.
Would it work? Possibly.
Has it been tried? No idea.
Is it wrong? Good question.

Tuesday, April 3, 2012

Glial cells are non-neurons that populate the nervous system. The name 'glia' comes from the Greek word for glue, and these cells were originally thought to be 'filler' cells or brain glue (not this kind).

In a sense, these cells are 'filler'. When the brain is damaged, it is glia, not new neurons, that grow into the void. (This can sometimes turn cancerous and lead to glioma.)

A recent review paper poetically summarizes the traditional role of glia:

"Astroglial cells were long considered to serve merely as the structural and metabolic supporting cast and scenery against which the shining neurones perform their illustrious duties." (Lalo et al., 2011)

This lovely summary is an obvious set up for a paper showing that "actually glia are quite important."

And indeed they are.

Even though they don't fire action potentials, glial cells have electrical activity and are involved in information processing.
Glial cells have receptors for neurotransmitters (such as glutamate and GABA). These are the very same types of receptor that neurons use to receive signals from other neurons at the synapse.

Lalo et al. point out several ways that these receptors might be activated on glial cells:

Glia might respond to neurotransmitter released from non-synaptic (ectopic) sites.

Glia might respond to transmitter released from other glia.

The receptors on glia might be activated by 'ambient' neurotransmitter.

While it is not clear which of these receptor-activating mechanisms predominates on glia, there is evidence from different brain areas for each type of information transfer.

No matter how these receptors are stimulated, they can depolarize the glial cells and even induce calcium transients. Lalo et al. explain that these actions might cause the glial cells to release lactate which is taken up by neurons as an energy source.

In short, the role of these glial cells might be mainly metabolism control near synapses, and the ionotropic neurotransmitter receptors might be the mechanism that signals when, where, and how much metabolism control is needed.

Originally when I started this blog (waaaay back in January 2012), I thought I would do something similar, find outrageous claims in the press or the scientific literature and explain what was wrong with them. The "Cellular Scale" was supposed to imply the weighing of these claims and judging them on their scientific worth. This name would have been delightfully clever if I had actually stuck to this original plan.

I suppose there are 3 reasons why this didn't happen:

1. I didn't immediately find many outrageous claims specific to neurons (most of the claims are a little 'zoomed out' from the cellular scale and involve whole human brain areas), so I only managed to produce one (not very) skeptical post.

A. Stuck to cellular-level neuroscience for the most part. Even though my 3 most popular posts are not about cells at all.

B. Posted something about twice a week.

C. Not run out of ideas. I was worried about this at first, but now every time I hear something interesting I think 'I could blog about that,' and I actually have a list of ideas that is growing faster than I am posting.

Over the next 3 months I want to:

i. Get back to my original plan and clear up some misconceptions people might have about cells.

ii. Get more comfortable on Twitter. Right now it is like being at a party eavesdropping on a super-interesting conversation between people I don't know.