Wednesday, May 30, 2012

What is a sociopath? A killer? A raging lunatic?
Martha Stout has written a relatively short book (though not as short as it could have been) defining the sociopath and arguing that sociopaths are surprisingly common, 4% of the population in fact.

There are two things I liked about this book and two things I really did not like about this book.

First the good:

1. Dr. Stout effectively argues against the stereotype that sociopath=serial killer. She defines a sociopath as someone without a 'conscience' who is incapable of real empathy. If this is combined with 'bloodlust' the person very well may turn out to be a serial killer. But if it is combined with 'preferring inertia', the person will manipulate their way into a situation where they are taken care of and don't have to do anything.

2. A very powerful insight from Dr. Stout is that the very fact of having a 'conscience' is what keeps 'normal' people from identifying and stopping sociopaths. A person with a 'conscience' will readily take the point of view of the sociopath and try to justify or explain their actions in terms of 'having a conscience': "they are just depressed", "they didn't know this or that", "there must have been a miscommunication". It is almost impossible for a person with a conscience to comprehend someone doing something manipulative or cruel for essentially no reason, so they invent a reason that would make the sociopath's actions comprehensible. The very thing the sociopath lacks (the ability to empathize and identify with the wants and needs of others) is the thing that prevents 'normal' people from identifying them.

Now the bad:

1. What the heck is a 'conscience' anyway? Dr. Stout talks about the conscience like it is some brain structure that you either have or don't have. I wasn't convinced that people are either complete sociopaths or completely normal. I assume there is a continuum, and Dr. Stout does nothing to convince me otherwise, while at the same time constantly implying that it is an either/or situation.

2. I would have been much more interested if this book had delved into the possible neural underpinnings of conscience. Is it getting a reward signal from the 'happiness' of others? Getting a pain signal from the pain of others?

An example that came to mind is a scene from the movie Pan's Labyrinth that will probably haunt me forever. A man has had his mouth slit and there is a very painful-to-watch scene where he sews up his own cheek, bandages it, and then takes a shot of vodka. The vodka seeps out his cheek into the bandage. I remember having a very physical reaction to this part of the movie, literally cringing and grabbing my own cheek.

What I want to know is: do sociopaths have this same physical reaction, or would 'not having a conscience' or an 'inability to empathize' prevent something so instinctual?

Well, lucky for us, someone else wondered something similar. A 2008 paper tested healthy individuals for degrees of 'psychopathy' with a self-report questionnaire, and then measured their responses to transcranial magnetic stimulation (TMS) of the motor cortex while they watched videos. The authors claim that this somehow measures mirror neuron activity, but I think that is going too far since they are not measuring individual neurons.

They used a nice set of controls to specifically isolate the effect of watching something potentially painful. The videos were of a Q-tip touching a hand, a needle touching a hand, and a needle touching an apple. They also ran all the videos stopped early (before the contact between the objects is made).

In 'normal' people, seeing a needle poking a hand causes a reduction in response to TMS stimulation. What this reduction in cortical excitability means is not clear, so any finding in this study will be hard to interpret. They found that the degree of response reduction was not correlated with the psychopathy index taken as a whole, but when they isolated the 'coldheartedness' component, they found that the more cold-hearted a person reports being, the stronger the signal reduction during TMS. Despite the nice controls, this study leaves a lot to be desired. They have a small sample size (n=18), and even the correlation they did mine out is not very strong (R=-0.58). In addition, the increase in signal reduction is 'supposed' to indicate 'more empathy', so the meaning of this study is basically ripe for the cherry-picking. Fortunately they don't spend a ton of time speculating wildly about what this reduction might mean; they simply say that their study finds a 'link' between motor empathy and cold-heartedness and end with the classic 'more studies need to be done'.
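For some perspective on how fragile a correlation of R=-0.58 with n=18 is, here is a quick back-of-the-envelope significance check using the standard t-statistic for a Pearson correlation (the numbers are the ones I quote from the paper above):

```python
import math

# Values reported in the study (as quoted above)
r, n = -0.58, 18

# t-statistic for a Pearson correlation, with n-2 degrees of freedom
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)

print(abs(t))  # roughly 2.85, with 16 degrees of freedom
```

With 16 degrees of freedom, |t| of about 2.85 corresponds to a two-tailed p of roughly 0.01: nominally significant, but nothing you would want to build a theory of empathy on from a single small sample.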

Unfortunately the information that can be gleaned from this study is pretty limited, and the brain of the sociopath is still a mystery.

A caveat: I assume someone has studied whether the physical reaction that I describe above occurs in sociopaths or not, but I did not find a study testing it. If you know of one, please send it my way.

Friday, May 25, 2012

The fungus-controlled zombie ant is one of nature's greatest wonders. A fungus (e.g. O. unilateralis) is inhaled by an ant (e.g. Camponotus leonardi) and begins to grow inside its body. Eventually the fungus infests the brain of the ant, causing it to drunkenly wander, periodically convulse, climb up a leaf, and clamp down on its ridge. Once the ant is securely in place, the fungus devours the brain and innards of the ant and grows out the back of its head, often (but not always) releasing its spores onto the ground below. Un-freaking-believable, right?

As if this wasn't amazing enough, it's not like it is only one fungus species that infects only one ant species. There are many of these fungi and they infect many different kinds of insect, but somehow maintain a species specificity. In other words, Fungus#1 can infect SpeciesX but not SpeciesY, Fungus#2 infects SpeciesY but not SpeciesX, and so forth.

So WHY does this happen? and HOW has no one looked at the brain cells of these ants?

Though no one has looked at the brains of these ants, last year a paper painstakingly characterized their behavior under 'fungal control'. The most interesting characteristics are:

1. The ants display a 'drunkard's walk' (the authors' words).

2. The ants periodically spasm and fall down (if they are above ground level).

3. The ants clamp down on the underside of the main vein of a leaf (never the side of the leaf, never the top). Interestingly, they all bite down on the leaf around solar noon.

This figure shows the behavior of several ants. Each ant was observed during the time of the horizontal blue bar. The black vertical lines are 'spasms' which caused the ants to fall down (gray stars), and the red triangles mark when the ant bit down on the leaf ridge.

Because we have no idea how the fungus is manipulating the ant, let's wildly speculate.

1. The Drunken Walk
Why: The reason for this is not clear. The ant doesn't go far, so the non-directional walking could serve to keep it close to other ants.
How: The mechanism is also not clear, but an ant's directional walking usually follows a pheromone trail. The fungus could presumably cause random walking by confusing the ant's ability to sense pheromones. It could possibly even cause 'hallucinatory' pheromone sensing.

2. The Periodic Spasms
Why: The authors speculate that the purpose of these spasms is to keep the ant near the ground. The infected ants spend much more time at ground level than the uninfected ants, and the spasms are often followed by a fall.
How: A fungus could essentially cause a seizure in the ant's brain by manipulating potassium or calcium channels. On the other hand, I suppose the fungus could be acting directly on the muscles, causing them to twitch in an uncontrolled way.

3. The Clamping
Why: This has an obvious function: to root the ant in place for ultimate fungal growth and dispersion.
How: First of all, biting and even walking on leaves is not something these ants normally do. So the fungus isn't just hijacking a behavior the ant already has, it's basically creating a new one. The correlation with solar noon indicates that a light or heat signal could contribute to the trigger, but basically nothing else is known about it. Interestingly, the clamping does not always have to be a single event either; a few of the ants clamped down on the leaf vein more than once. The authors of this paper spend time discussing the fungus's direct effect on the mandible muscles of the ant.

Figure 3 Hughes et al., 2011

They show that the mandible muscles of the normal ant are fat and healthy (B), but the muscles of the infected ant are separated and reduced in size (C). Though this image is of an ant at the moment of biting, the authors suggest that the deterioration of the mandible muscle might be to prevent re-opening of the clamp. They do not speculate on how the clamp is initiated in the first place, or why it occurs at noon.

So please, fellow neuroscientists, somebody stain these brains! It's just too fascinating to resist exploration. What proteins are altered? What is the receptor composition of behaviorally-specific neurons? Are the dendrites differently shaped?
And who knows what sort of great advances might be hidden in these brain-controlling fungi. The magic of optogenetics comes from lowly light-sensitive bacteria, just think of the possibilities hidden in brain-controlling fungus.

To be fair, some neuroscience has been done on parasitic brain control, but it is very limited. In fact it is limited to basically one histological study about parasitic worms that infest crickets and cause them to drown themselves (the subject of a future blog post). However, suicide-crickets are no zombie-ants, and the exact mechanisms of the interactions are not likely the same.

Tuesday, May 22, 2012

A computational model is a surrogate version of something, usually made on a computer. An example that most people are familiar with is the computational model used to predict the weather. If you know how low pressure and high pressure fronts interact, and you know where one is and how fast it is moving, you can program software to play the situation out in a simulation, predicting what will happen and how quickly.

Computational neuroscience is more or less just like that and it can be used to investigate all levels of neuroscience. Here's a brief intro to three of the basic levels. There are other types of computational models in neuroscience, but these three make up most of them.

The Whole Brain
If you know how the thalamus, hippocampus, amygdala, and cortex all work together, you can simulate how inputs into one structure might influence the others. In this case each brain structure would basically be a 'black box' that received input and produced output based on known data. To do this kind of simulation you wouldn't actually simulate the millions of neurons in each structure.

The Neural Network
On the next level down, you can make a computational model of a neural network inside a single brain structure. If you know the types of neurons in the amygdala and how they interact with each other, you can program those relationships in and test what might happen if one class of neurons fires too much or too little. You can test the effect removing one class of neurons has on the whole network and the output of that brain structure. In this case you are simulating individual neurons, but you are probably not simulating the details of the neurons, such as their dendrites and their specific channel composition. In this kind of computational model, the neurons are the 'black boxes' which receive input and produce output based on pre-set equations.
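To make the 'black box' idea concrete, here is a toy rate-model network in Python. The circuit, weights, and numbers are entirely made up for illustration (this is not a model of any real amygdala circuit); the point is just that each neuron is a pre-set input-output equation, and you can 'remove' a class of neurons and watch the output change:

```python
import math

def rate(x):
    """Black-box neuron: sigmoidal firing rate as a function of total input."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical 3-neuron circuit: excitatory E1 and inhibitory I both project to E2
weights = {("E1", "E2"): 1.5, ("I", "E2"): -2.0}

def circuit_output(external_input, w):
    e1 = rate(external_input)
    i = rate(external_input)
    # E2's output is a pre-set equation of its weighted inputs
    return rate(w[("E1", "E2")] * e1 + w[("I", "E2")] * i)

full = circuit_output(2.0, weights)

# 'Remove' the inhibitory class and see how the circuit output changes
lesioned_weights = {**weights, ("I", "E2"): 0.0}
lesioned = circuit_output(2.0, lesioned_weights)

print(full < lesioned)  # True: output increases when inhibition is removed
```

Real network models work the same way, just with many more neurons and better-justified equations.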

The Cellular Scale
One level down from this is a computational model of an individual neuron. In this type of model, the neuron is simulated in detail, with its dendrites, soma, and sometimes the axon. With this kind of model, you can test the effects of different dendrite shapes on the processing of the neuron. Usually the individual channels (such as calcium, potassium and sodium channels) in the neuron are programmed in and the electrical properties of the cells are calculated in detail. In this situation, the specific proteins and channels are the 'black boxes' computing ionic concentrations based on pre-set equations. A detailed tutorial on how to make a biophysically realistic model neuron can be found here.

a neuron can be simulated as a series of resistors and capacitors
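The resistors-and-capacitors picture above boils down to the standard leaky membrane equation, C dV/dt = -(V - E_leak)/R + I_inj, which can be integrated in a few lines. The parameter values here are generic textbook-style numbers for illustration, not taken from any particular model:

```python
# Single-compartment 'RC circuit' membrane: C dV/dt = -(V - E_leak)/R + I_inj
C = 1.0         # membrane capacitance (nF)
R = 10.0        # membrane resistance (MOhm)
E_leak = -65.0  # leak reversal / resting potential (mV)
I_inj = 1.0     # injected current (nA)
dt = 0.1        # time step (ms)

V = E_leak  # start at rest
for _ in range(10000):  # integrate 1 second with forward Euler
    dVdt = (-(V - E_leak) / R + I_inj) / C
    V += dVdt * dt

# The voltage relaxes toward E_leak + I_inj * R = -55 mV with time constant R*C
print(round(V, 2))
```

A 'biophysically realistic' model is this same idea with one such compartment per piece of dendrite, plus extra current terms for each type of ion channel.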

Sidiropoulou et al., (2006) have an excellent review of the neuroscience discoveries that have been made with this cellular level of computational modeling.

"Understanding how the brain works remains one of the most exciting and intricate challenges of modern biology. Despite the wealth of information that has accumulated during the past years about the molecular and biophysical mechanisms that underlie neuronal activity, similar advances have yet to be made in understanding the rules that govern information processing and the relationship between the structure and function of a neuron." (Intro, Sidiropoulou et al., 2006) (red mine)

This paper directly argues against the idea that neurons are just 'on-off' switches, and illustrates the complex computational processes that occur in individual locations of the neuron. They cover computational studies analyzing the information processing that occurs in the dendrite, at the synapse, at the soma, and even in the axon. The details are too complicated to get into here, but the paper is free.

Finally, they end with a call to action for experimental and computational neuroscientists to work together to solve the really interesting problems in cellular neuroscience.

"The following open questions could provide fertile ground for collaborations among molecular biologists, geneticists, physiologists, modellers and behaviourists for further explorations of the mysteries of the brain. Do specific behaviours require certain neuronal computational tasks? Which parts of the neural circuit or the neuron itself are responsible for these tasks? What are the underlying molecular mechanisms for the distinct operating modes of neuronal integration? Such holistic approaches should lend support to the growing idea reinforced by this review: that something smaller than the cell lies at the heart of neural computation." (Discussion, Sidiropoulou et al., 2006)

Just as computational models of the weather predict it with only some degree of accuracy, no model is perfect. Similarly, computational neuroscience is not going to lead to all the answers; where it is particularly useful is in making very specific predictions about how certain aspects of a neuron or neural circuit might work. The insight gained from computational models can guide and focus experiments, making them more efficient. This saves time, money, energy, and animal lives.

Thursday, May 17, 2012

If I told you there was a special neuron that only Zebras had in their brains, what function would you predict this neuron to have?

I can think of a few:

1. Eating Grass
2. .....
3. ...

Ok, so I can only think of one.

It seems reasonable to assume that maybe this neuron has something to do with eating grass, and it seems reasonable to conduct experiments testing whether zebras that are bad at eating grass have fewer of these neurons, and the like.

Now, what if I told you that new research has found that manatees also have these neurons!

baby animals are just so much cuter (source)
also, your animal cells are firing right now.

You might think 'ah, the noble manatee, cow of the sea, well that confirms it. These neurons must be for eating grass, because both zebras and manatees spend all their time eating some kind of grass.'

But wait, not so fast. Next, you find out that several other animals actually have these neurons: elephants, whales, apes, the common human, and so forth.

Now what do you make of these neurons?

Humans don't spend much time eating grass...

Well this is basically the situation the Von Economo Neurons (VENs) are in, except reversed. They were first found in humans and great apes, leading to much speculation that these neurons were responsible for consciousness, self-awareness, and empathy.

But now it has become clear that manatees, elephants, whales, hippos, and 'the common zebra' also have these neurons.

I have been skeptical about VENs in the past, speculating that their unique shape might just be related to brain size, but a recent paper reveals a new development in VEN study that could prove me mostly wrong.

This new Neuron paper has found that VENs are also present in the brain of the macaque monkey.

Unless you are familiar with neuroscience research, this may not sound that exciting to you. It may sound like just another animal to add to the list reinforcing how 'unspecial' the VENs are.

However, if you are familiar with neuroscience research, you will realize the one thing that is different about macaque monkeys from every other animal currently on the VEN list.

Basically, the difference is that you can implant electrodes in the macaque brain. Humans, hippos, gorillas, whales, elephants, zebras, and manatees are all (for either technical or ethical reasons) off-limits for this kind of neuroscience research. And as of now, electrode implantation is pretty much the only way to test whether specific neurons are active during a specific task. Other methods, such as fMRI, can tell which area of the brain is active during a task, but cannot resolve which neuron or even which class of neuron is active at the time.

I look forward to studies investigating the physiological (rather than anatomical) properties of the VENs. Specifically I am eager to see studies directly investigating VEN activity during self-recognition or cognitive tasks.

If you want to read more about VENs, they have been covered quite nicely by The NeuroCritic.

Tuesday, May 15, 2012

This is an image of a piece of retina with neurons labeled with a specific marker (JAM-B). Notice something about them? Yep, their dendrites are all pointing downwards. If you are a regular Cellular Scale reader, you will remember that dendrites take on many different shapes, and that these shapes often mean something with regard to the function of that cell.

A 2008 paper shows that the downward direction of these dendrites is no coincidence. These neurons are sensitive to visual input moving in a specific direction, the same direction that their dendrites are pointing. In other words, when a visual stimulus (such as a bar or dot) is moving across the retina in the soma-to-dendrite direction, these neurons are most active. When the visual stimulus moves in the opposite direction, these neurons are the least active.

Figure 2e, Kim et al., 2008

This diagram shows the direction of the dendrites (green line), and the direction of movement which activates that neuron the most strongly (red line). This is just one example, but on average the dendrite direction and the preferred stimulus direction matched up for these neurons.
Because the lens of the eye functionally reverses the visual world, this means that since the dendrites of these neurons point down, they actually respond to upward motion.

A direct match between the shape of the dendritic tree and the function of these neurons is a huge step toward understanding the way that Form and Function influence each other in the brain. The authors end this paper by asking

"One outstanding question is why the mouse has invested so heavily in sensitivity to upward motion." (Kim et al., 2008)

This could lead to speculation on mouse evolution and why a mouse would need to be extra-sensitive to upward visual input. But I think that is a wild goose chase: just because there are cells in the retina whose form and function match nicely for upward motion doesn't mean that mice are actually more sensitive to upward motion. (A motion-detection behavioral test is necessary to make that claim, and I don't know of any done in mice.)

In fact, the same group more recently (Kay et al., 2011) found that there are four similar classes of cell responsive to each of the four cardinal directions. These cells have some dendrite-direction correlation, but it is not as strong and clear cut as the upward sensitive cells specified in the 2008 paper.

What is particularly interesting is that, while the cells in the 2008 paper (J-RGCs) have such strong correlations between dendrite direction and stimulus direction sensitivity, the cells described in the 2011 paper (BD-RGCs) do not. From the discussion:

"The correspondence of dendritic asymmetry with preferred movement direction in BD-RGCs resembles that in J-RGCs, a far more strikingly asymmetric group of OFF-DSGCs that we described recently (Kim et al., 2008, 2010). We suspect, however, that the association differs in the two cases. Both J- and BD-RGCs include some cells whose arbors appear symmetric. The symmetric J-RGCs are not direction selective, supporting the idea that structure underlies function for these cells (Kim et al., 2008). In
contrast, structurally symmetric BD-RGCs are as direction selective as asymmetric ones, suggesting that for these cells structural asymmetry does not determine directional preference." (Kay et al., 2011)

In other words: Some cells are direction-sensitive without their dendrites being weighted to one side.

So the really exciting questions are: What are the molecular and cellular mechanisms that make these cells directionally sensitive, and is the dendritic orientation necessary for direction sensitivity? If an upward motion cell was somehow transplanted in the opposite orientation, would it become a downward motion cell?

I suspect that just as computational neuroscience helped us understand the dendrite-based frequency sensitivity in the bird brain, a computational model would help us understand how a cell could respond maximally to soma-dendrite directional motion.
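As a flavor of what such a model might test, here is a crude delay-and-compare (Reichardt-style) detector, a classic textbook mechanism for direction selectivity. To be clear, this is generic machinery I am using for illustration, not the actual mechanism of J-RGCs:

```python
def detector_response(stimulus_positions, left=0, right=1, delay=1):
    """Delay the signal from one point, then correlate it with the direct
    signal from a neighboring point; motion in the preferred direction
    lines the two signals up and drives a large response."""
    left_sig = [1 if p == left else 0 for p in stimulus_positions]
    right_sig = [1 if p == right else 0 for p in stimulus_positions]
    # shift the left signal in time by 'delay' steps
    delayed_left = [0] * delay + left_sig[:-delay]
    return sum(a * b for a, b in zip(delayed_left, right_sig))

# A bar sweeping left-to-right (preferred) vs right-to-left (null)
preferred = detector_response([0, 1])  # hits the left point, then the right
null = detector_response([1, 0])       # hits the right point, then the left

print(preferred, null)  # the preferred direction wins
```

An asymmetric dendrite could plausibly implement this kind of asymmetric delay-and-sum in space rather than in explicit time steps, which is exactly the sort of hypothesis a detailed compartmental model could test.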

Saturday, May 12, 2012

Here is a worldview that I would advocate all smart women take, if only for a little while.

It's not you, it's them.

Now this attitude can become problematic when taken to its extreme, but I have met sooooo many more women who would benefit from adopting this viewpoint than women who go too far in the egomaniac department.

Here are some test situations to see if you need this attitude adjustment.

Situation 1:

You are a post-doc and you are talking to a PI (not your own), and they say something like "well, how does that compare to So and So et al., 2005?"

You don't know that paper.

Do you

A. Think Oh my god, this PI is going to think I am an idiot for not knowing this paper.

or

B. Think What a dork for quoting the name and date of a paper rather than the main point, who does that?

(No matter what your internal dialogue, the correct out-loud response is "In what respect?" or "What did they say in that paper?")

So if you picked A, you would probably benefit from taking the "It's not you, it's them" viewpoint. If you picked B, good work, you already adhere to this worldview (But pay close attention in Situation 3).

Situation 2:

You are sitting in a talk given by Dr. Bigshot. You are paying attention to the talk, and you are unclear about a graph shown on the screen.

I think you can see where this is going. Is it YOU or is it THEM? Unless you were sleeping during this talk, the answer is THEM.

Here's the final, slightly more tricky situation.

You are in that same talk. You ask a question, and Bigshot answers with "I showed that on slide 3", goes back to it and, sure enough, exactly what you just asked was on slide 3.

Do you:

A. Think oops, how embarrassing.

or

B. Think oh my god, I can't believe I just asked Bigshot such a stupid question. I don't belong here, I am too much of an idiot to be a scientist. Now everyone will see that I am an impostor.

or

C. Think Bigshot is an idiot.

In this situation the correct answer is A. If you ask something dumb, you ask something dumb and that's all there is to it. It doesn't mean you are completely stupid, it just means you dazed out for a minute. You should be embarrassed, but not crushed (B). Learn from the experience and pay more attention next time, but realize that everyone makes mistakes. In this situation answer C is an example of going too far with "It's not you it's them" mentality. You have to realize when you've genuinely made a mistake. (Note that not knowing a paper by the author and date, or not knowing what an unexplained or unlabeled graph means are not mistakes.)

This doesn't just apply to science either. If someone acts like you are stupid for not knowing the actor in a movie, or when such and such song came out, or anything like that, just remember: THEY are the stupid ones for assuming you (or anyone) should know that.

Tuesday, May 8, 2012

The brain is misunderstood. The media and public generally do not understand even the very basics of neuroscience. Can your average person on the street tell you what a neuron is, draw one, explain how it is different from a blood cell? Perhaps more importantly, can the people who are prone to saying "slap to the cerebellum" and "men use different parts of their brains than women" actually tell you what the cerebellum does or what an fMRI actually measures?

I understand that you can lament anything as not being well enough understood by the public. The laws of physics, the rules of grammar, the history of ancient civilizations, and how to spell are all things that cause experts to sigh and shake their heads about the lack of knowledge in the general public. However, all these things are taught in school, while neuroscience is not.

Why Neuroscience should be Taught in School

Neuroscience is sort of where genetics was 20-30 years ago: the scientific frontier, fascinating to the public, changing the general worldview, raising ethical questions, science fiction's closest reflection in reality. This has its benefits and its downfalls. There is currently strong general enthusiasm for neuroscience for just these reasons, but because everything 'neuro' is so exciting, the risk of media misrepresentation is high and the misuse of neuroscience concepts and terms by pseudo-science is common.

If basic neuroscience were taught in schools, and the general public understood how neurons use ions to send and receive information, they would not be (as) likely to buy crystals whose vibrations will 'resonate with your neurons'.
If basic neuroscience were taught in schools, and the general public understood how an fMRI scan measures the difference in blood-oxygen content throughout the brain, they would be less likely to take a cleaned-up brain scan image at face value.
If basic neuroscience were taught in schools, and the general public understood the sensory inputs to the brain, they would be less taken in by products that promise to 'stimulate your auditory nerve*' or 'stimulate both brain hemispheres**'.

*Anything that makes noise stimulates your auditory nerve (Example from Cordelia Fine)
**If one eye is open, both sides of your brain are being stimulated

How Neuroscience should be Taught in School

One of the limitations to teaching neuroscience in schools is the cost. A setup for a neurophysiology experiment can cost thousands of dollars. Backyard Brains, a company which I have blogged about before, has recently published an open access paper describing the use of their SpikerBox in a high school environment.

The students assembled their own spikerboxes and conducted experiments to answer questions like:

How does your brain tell your muscles to move?
and
How do neurons generate electricity?

The students generated interesting questions themselves beyond the ones pre-determined for the experiments. In the discussion, Marzullo and Gage state "K-12 students are capable of not just following experimental protocols, but also participating in scientific discovery."

In conclusion, I hope that this investigative, experiment-based approach to teaching neuroscience in schools will take off and that the basic principles of neuroscience will become common knowledge. Will this stop people from believing in unrealistic miracle treatments? Of course not, but it may inhibit people from casually and incorrectly throwing around neurojargon to sell a product.

They can use all sorts of specialty inks. So if you want a set of shirts for your lab with a gfp-style glow in the dark pyramidal neuron on it, they can do it. If you want a shirt with a brain on it that says "neuroscience"...done! They made me one (see above), and you could contact them to order one yourself if you want. They can even do high-resolution color blending, so if you've always wanted some brainbow shirts, they could do that too.

Tuesday, May 1, 2012

Using a little optogenetic trickery, you can directly activate specific worm neurons with light. If you know your worm neurons, you can stimulate ones that make it think it has suddenly touched something with its nose or that the environment is suddenly very salty.

Before we dive into worm VR, let's back up and discuss this specific worm.

The Magnificent C. elegans
C. elegans is a surprisingly popular subject of study in neuroscience. It has a simple and well-defined nervous system that contains only 302 neurons (in the hermaphrodite; the rare males have a few extra neurons). All the neurons and even all the connections between the neurons have been pretty well characterized. The worms are small (hundreds can fit on a standard-sized petri dish) and they reproduce quickly. And if that weren't enough to make C. elegans a desirable subject for study, they can be genetically altered with relative ease, and they exhibit rudimentary learning skills.

A recent technological development has made clever use of genetic tools that allow calcium influx (an indicator of neural activity) to be visualized in neurons and allow neurons to be activated by light. Faumont et al., (2011) have created a worm tracking system that uses the fluorescence from a genetically altered neuron to locate the worm and recenter the microscope on the worm in real time. This allows for completely non-invasive visualization of neuronal calcium/activity in the awake behaving animal.

The recent paper in PLoS One, describes exactly how they got the microscope to track the worm in real time without blurring of the signal or messing up the calcium imaging. The paper is open access, so you can go read the details for free.

To see this larger and more clearly, you can download this video and their 4 other supplementary videos here.
In this video, you can see the animal moving around in the top left, the path it follows in the top right, the calcium fluorescence signal in the bottom left (notice the calcium neuron is always in the field of view), and the activity of this particular neuron when the worm is traveling either forward (blue) or backward (red).

The "Dedicated Circuit" Hypothesis
The neuron imaged in this video is called AVB, and it is a 'command neuron'. Faumont et al. show that it increases in activity when the worm is moving forward and decreases when the worm moves backwards. A similar command neuron, AVA, does just the opposite, increasing when the worm moves backward and decreasing when it moves forward. These data support what is called the "dedicated circuit hypothesis" which says that the worm uses one set of neurons to go forward and a completely different set of neurons to move backwards.

While Faumont et al. show that the dedicated circuit hypothesis is supported for command neurons, they find that the activity of the actual motor neurons (the neurons on the body wall that control contraction of the muscles) does not support this hypothesis. If the dedicated circuit hypothesis were true, the A-type motor neurons should only be active and oscillating during backward movement, and the B-type motor neurons should only be active during forward movement. They found that this wasn't the case: both were active and oscillating during both forward and backward motion.

Virtual Reality for Worms
Now back to virtual reality. This Faumont et al. paper is a showcase of new tools that can be used to study C. elegans in a simultaneously macroscopic and microscopic way. One of the new techniques they introduce is the optogenetic stimulation of specific neurons in specific places to create an 'environment' for the worm.

Faumont et al., 2011 Figure 2

When they genetically express channelrhodopsin (a channel which activates neurons when exposed to blue light) in the ASH neuron (a neuron sensitive to changes in osmolarity, or saltiness), they can activate that neuron whenever they want by turning on the blue light. They create a virtual environment by tracking the worm as it travels in a field, and activating the blue light when it reaches a certain xy coordinate. In the figure above, they activate the neuron when the worm's nose is within the outer ring (traces turn blue). This makes the worm 'think' that the ring is full of saltier liquid than the rest of the area.
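The virtual-environment logic itself is conceptually simple: every time the tracker reports a new nose position, check whether it falls inside the virtual 'salty' ring and switch the blue light accordingly. Here is a hypothetical sketch (the coordinates, radii, and function names are mine, not from the paper):

```python
import math

# Hypothetical virtual environment: an annular 'salty' ring (made-up values)
CENTER = (0.0, 0.0)
INNER_RADIUS, OUTER_RADIUS = 5.0, 7.0  # mm

def in_virtual_ring(x, y):
    """True if the worm's nose is inside the virtual ring."""
    r = math.hypot(x - CENTER[0], y - CENTER[1])
    return INNER_RADIUS <= r <= OUTER_RADIUS

def blue_light_on(nose_xy):
    """Light on -> channelrhodopsin activates ASH, so the worm 'senses'
    high salt exactly when its nose is inside the ring."""
    return in_virtual_ring(*nose_xy)

print(blue_light_on((6.0, 0.0)))  # nose in the ring: light on
print(blue_light_on((1.0, 1.0)))  # nose near the center: light off
```

Because the 'environment' is just a condition on coordinates, changing its shape or size is a one-line edit instead of a new microfluidics experiment.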

This virtual environment takes away all the technical difficulties of actually creating a ring of salty water in a pool of less salty water, and the VR environment can be quickly and easily changed into any shape or size, when desired.
This new tracking method, in combination with calcium imaging and optogenetics, represents a leap forward in cellular-scale neuroscience. To non-invasively visualize neuronal activity, activate neurons, and record the coinciding behavior is a combination mammalian neuroscientists can only dream about.