Monday, May 30, 2011

We discussed thought vs. intelligence, and covered lab rats, Lassie and dolphins. But what about primates?

For the purposes of discussing intelligence, we tend to divide primates into three groups: monkeys, great apes and humans. The human category is rather obvious, but to be explicit, it includes Homo sapiens sapiens (modern man) as well as all of the obvious precursors: Homo sapiens (Cro-Magnon), Homo neanderthalensis, Homo habilis, etc. Great apes include chimpanzees, bonobos (also called pygmy chimpanzees), gorillas and orangutans (and sometimes gibbons). Humans and great apes share a set of characteristics: they are large, tailless primates capable of walking erect; they live in family groups; they are omnivorous and sexually dimorphic, with limited characteristics of reproductive *heat*; and they have an 8-9 month pregnancy usually yielding one offspring.

Monkeys are basically everything else in the primate taxonomic order. They are distinct from apes in size and gestation, they walk on all fours instead of erect, and most have tails. Many New World monkeys have prehensile tails (capable of grasping), while Old World monkey tails are not prehensile.

Are monkeys intelligent? Certainly they seem to be. They can solve puzzles, use tools and operate on memory. Many behaviors are *emergent* (they "emerge" during experience) and do not *need* operant training to learn a task - many times they can learn the task simply by watching another monkey do it. Do they think? Well, the problem is that we really have no way of knowing.

But with the great apes, we do have a way of knowing if they think. The Great Ape Project, and the work of Jane Goodall, Francine Patterson, Beatrix T. Gardner and R. Allen Gardner, and David Premack, have all given us a glimpse of language use by the chimpanzee Washoe, the gorilla Koko (Patterson, F.G. & Linden, E. 1981, The Education of Koko, New York: Holt, Rinehart and Winston), and the chimpanzees Sarah and Nim (Premack, David & Premack, Ann James. 1983, The Mind of an Ape). With the help of American Sign Language and other symbolic tools, these great apes can communicate back with the scientists who study them.

Such communication clearly demonstrates a sense of self-identity, motivation, awareness of surroundings, wants, needs and even emotions. However, critics of assigning "thought" to nonhuman primates point out that the great apes learn their language very slowly, and many not at all. In addition, a key gap in the communication is that the subjects do not initiate their own interrogatives - in other words, they do not ask questions - as any human two-year-old might. Supporters attribute these shortcomings to the problem of applying human standards to nonhumans.

On the other hand, what *other* standards do we have to apply?

It's a tricky question, and one not easily answered in psychological terms, but we do have recourse to a brain science answer. The key differences between the human and great ape brain and the monkey brain are the size and development of the Frontal Lobe (prefrontal areas) and Temporal Lobe. Thus the brain areas responsible for executive function, memory and many of the processes we believe underlie thought are more highly developed in humans and our near cousins, the great apes. Therefore, if any nonhuman species are likely to "think," it would be among the apes.

This now leads us into the realm of consciousness and sentience, which we will cover next in The Lab Rats' Guide to the Brain!

Saturday, May 28, 2011

Tonight's blog will be brief due to other commitments; thus the current topic - thought vs. intelligence - will span two successive blogs.

These are extremely intangible concepts, and we assess thought and intelligence primarily by their differences: thought is largely a function of executive information processing by the frontal lobe, while what we measure as *intelligence* is largely a function of association and memory.

For example, we clearly understand that humans have both the capacity for thought, as well as for intelligence, but what of animals? For the moment we skip over humanity's closest cousins, the Great Apes - and start with lab rats.

All kidding aside, rats show limited signs of intelligence: they can clearly solve puzzles on the basis of memory, but do they have thoughts? Probably not, and their intelligence is again, largely a result of memory and what is called operant training.

Operant training is how rats run mazes, dogs and horses learn tricks, and those sea lions and otters in the SeaWorld show do all of the tricks that *appear* to represent intelligence. First, you train an animal that a particular sound or light means a reward. The term for this is "magazine" training (for the food dispenser magazine in the famous "Skinner box"). Next, the animal is trained that it must perform a particular response to get the click that means reward. Then the animal learns to perform yet another response, to get the signal for the first response, to get the click of the magazine which means reward. You can keep this up - a process called "operant chaining" - until the animal can smoothly run up a ramp, ring a bell, run down steps, jump through a window, and pull a lost child from an abandoned well ("Well done, Lassie!").
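The backward-trained chain described above can be sketched as a toy simulation. This is purely illustrative - the function name, reward value and discount factor are all invented for the example, not a real behavioral model - but it captures the key idea: the final link is paired with the primary reward first, and each earlier response is reinforced by the cue for the link that follows it.

```python
# Toy sketch of operant chaining (all names and values invented).
# The chain is trained *backward*: the last response is reinforced by the
# primary reward, and each earlier response by the conditioned cue that
# signals the next link - so reinforcement weakens toward the front.

def chain_values(responses, primary_reward=1.0, discount=0.8):
    """Conditioned-reinforcement value of each link, strongest at the end."""
    values = {}
    v = primary_reward
    for r in reversed(responses):   # train the last link first
        values[r] = v
        v *= discount               # earlier links get weaker reinforcement
    return values

chain = ["run up ramp", "ring bell", "run down steps", "jump through window"]
values = chain_values(chain)
for r in chain:
    print(f"{r}: {values[r]:.2f}")
```

The decaying values are one reason long chains take patience to train: the first links in the performed sequence are the furthest removed from the actual food reward.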

Intelligence? Yes, of a limited sort. Thought? No. In fact, we should be cautious *not* to project human thoughts and motivations onto animals - after all, rats and mice can learn, ants and honeybees cooperate, and many animals form family groups. Behaviors do not equate to thoughts.

So, how about dolphins? They are intelligent, right? Yes. They show the ability to learn - really fast - and they *initiate* behaviors that are not necessarily trained. They also show evidence of having a language. So do they *think*? Maybe. They appear to have behaviors that are not strictly motivated by their own needs, and behave toward each other in manners reminiscent of how humans act. But again, be wary of projecting human patterns onto animals. We don't *know* what or how dolphins think, and won't until we can find a way to let them tell us!

That brings us to primates - monkeys, great apes and humans - and here I'm going to cheat. In honor of the May television season, I leave you with a cliff-hanger:

Thursday, May 26, 2011

Approaching this blog, I am reminded of the song in Les Misérables (the musical) with the same name as our title. The protagonist, Jean Valjean, debates revealing his identity, thereby saving an innocent man but condemning himself to prison. Questions of self-identity, like conscience and moral decision, are part of the nebulous properties of "the mind" and are more appropriately in the realm of psychology than neuroscience.

Yet we know that developing human children go through phases of discovering self vs. external world, well before the cliché of the teenager who leaves home to "discover himself." A toddler does not *like* the game of peek-a-boo at first, because they have not yet learned that objects don't disappear when the eyes are closed. Later they treat it as a game, discovering that yes, they can close their eyes or hide, and the external world remains the same. Newborns are *all* self, and children do not truly learn that there are others that are "not self" until well after they discover language skills. At least one theory of child development attributes this to the learning of symbols - newborn brains process only raw data, and it is not until they learn to group data into symbolic representations that children are capable of learning language, music and art, and of sharing their crayons.

This skill is associated with brain development in two areas - the prefrontal cortex of the Frontal Lobe, and the hippocampus and surrounding parahippocampal areas of the Temporal Lobe. The role of the former is perhaps obvious: the Frontal Lobe is necessary for executive function, for decision-making, and for abstract thought. Orbitofrontal cortex is involved in prediction and correction (by means of assessing the accuracy of prediction) of behavior, so it likewise is important to the development of a symbology of self and nonself. From this you could surmise that the hippocampus and amygdala, with their roles in memory processing and assessment of expected vs. actual outcome, would interact with prefrontal areas to develop a "history" of which objects and actions represent consequences of one's own actions, and which outcomes are the result of external factors - and you'd be partially correct.

To gain the other part, we need to examine a lab rat. Well, actually, a LabRat. Up there to the right is Ratface, one of my (fictional) intelligent LabRats that assist me in teaching and writing. Well, the intelligence part is debatable, as you can see, he's always getting in trouble, stuck, trapped, mistaken for a paperweight or even a computer mouse. Ratface has been a laboratory subject for many years, participating in a lot of pharmacological studies of cognitive enhancers, hallucinogens, psychedelics, euphorics and stimulants and is sort of an example for the new public service announcement: "This is your rat on drugs." Yet Ratface's hippocampus does something amazing - it creates and maintains a map of the environment. OK, maybe not unique - since all mammals do so, as well as some birds, reptiles and even a few insects - still the hippocampal map is amazing.

Neurons in hippocampus are content to fire slowly (if at all) most of the time, but some greatly increase their firing when the Ratface is in a very specific place in his surroundings. Neighboring neurons do not necessarily fire in adjacent positions, and the firing can be highly dependent on the context, but the result is a complete map of the surrounding space in what we call a "sparse, distributed network" (meaning it uses a few, widely scattered neurons, and it requires a specific set of connections - a "wiring diagram" if you will - to read it). Neuroscientists have found that the hippocampal mapping system is actually two maps, and it requires lots of sensory input to make it work: vision, hearing, smell, touch, proprioception, and memory of prior movements. The two maps are what are important to today's blog about self: One map is centered on "self" and moves with Ratface through his environment - we call this the "egocentric" map. The other map is structured on the relationship of external objects to each other, and remains fixed no matter where Ratface moves within that environment - this is the "allocentric" map. In the laboratory we can also see that the egocentric map is active no matter what environment Ratface enters (or leaves) and the allocentric map is inactive when Ratface leaves the room, but instantly reactivated. when he returns.
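The difference between the two coordinate frames is easy to sketch in code. This is not a neural model - just invented toy coordinates showing that an allocentric (room-fixed) position stays constant while its egocentric (rat-centered) description changes with every move:

```python
# Toy sketch (not a neural model): the same object in two coordinate frames,
# mirroring the "egocentric" vs. "allocentric" maps described above.

def allocentric_to_egocentric(obj_xy, rat_xy):
    """Express a fixed (allocentric) object position relative to the rat."""
    return (obj_xy[0] - rat_xy[0], obj_xy[1] - rat_xy[1])

cheese = (5, 3)   # allocentric: fixed in room coordinates, never changes

# As Ratface moves, only the egocentric description changes:
print(allocentric_to_egocentric(cheese, (0, 0)))  # (5, 3)
print(allocentric_to_egocentric(cheese, (4, 3)))  # (1, 0)
```

The brain's trick, of course, is maintaining both representations at once and keeping them consistent as the animal moves - which is where all that sensory input and memory of prior movements comes in.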

Again this is probably tied to the ability of the brain to handle symbols instead of raw sensory input - the map and all of hippocampal function relies on associations or relationships between places, events and time. Still, the division of the hippocampal map into self-centered and non-self-centered coordinates is an important means by which Ratface can assert that he most definitely is not a mouse - even if he does sometimes get his tail caught in the mousetraps.

So, "self" can be defined in abstract (prefrontal) and physical (hippocampal) terms, and contribute to the sense of identity. As for those folks out wandering the globe trying to "find themselves" - perhaps they just need a map.

Tuesday, May 24, 2011

This blog, and the series to immediately follow, are really about psychology. Psychology studies the mind more so than the brain. Since I am a neuroscientist and not a psychologist, I will keep the speculation to a minimum and just talk about what we (as neuroscientists) know about the brain areas serving these functions.

To foreshadow the coming posts, we have personality, sense of identity, thought vs. intelligence, consciousness and sentience. These subjects take on more of a metaphysical nature because it is difficult to point to a neuron and specify that the activity is part of a "thought." On the contrary, it is relatively easy to identify neural activity associated with a specific sensory input or motor output.

Most of what we know about "personality," for instance, comes from examining how a person's personality changes after damage to the brain. Here again, though, we need to be able to separate the psychological (changes due to the patient's personal motivation to make a change) from the physical (changes due to actual brain structure effects).

The most prominent example of personality localization - and the supposition that the Frontal Lobe controls personality - comes from the case of Phineas Gage. Gage was a railroad foreman in the mid-19th century. An explosion sent an iron tamping rod over three feet long sailing, and Gage was in its path. Several reconstructions of the result are at the right. Miraculously, Gage survived. He lost an eye, but motor function and memory seemed intact. However, over time, the friendly, outgoing, dependable Gage became angry, bitter, foul-mouthed and unreliable.

With the perspective of nearly two centuries, we now know that there are many "centers" of personality in the brain. Prefrontal and upper Parietal Lobe are key areas, although this study claims there are some key differences in size of brain areas: http://www.sciencedaily.com/releases/2010/06/100622142601.htm. It should be noted, however, that we should not be quick to specify cause and effect. We have learned that the brain is highly plastic: persons with brain injury can regain some or nearly all function over time with effort. Thus it cannot be conclusively stated whether the enlarged brain areas cause the behavior, or are the result of increased activity because of the behavior.

We also know that personality is dependent on neurochemicals. The classic "manic-depressive" state (more appropriately "Bipolar Affective Disorder") results in patients alternating between extremes: extroversion and introversion, friendliness and anger, compulsiveness and lethargy. In common understanding, this would be considered a personality change, and when on appropriate medication, the baseline personality is somewhere in the middle between the extremes. The change in personality is frequently associated with the balance between activity of neurons that utilize different neurotransmitters: serotonin is most important, but norepinephrine, dopamine and acetylcholine are also involved. Frequently these neurons are located in the basal ganglia and midbrain. Likewise, fear and anxiety are frequently associated with activity in the limbic system (although not exclusively), so the Frontal and Parietal Lobes are not the sole source of neural activity underlying personality.

As with so many other functions of the brain, personality likely results from interaction with many different brain areas. There are many similar conclusions when dealing with the "psychology" topics, thus the separate sciences of neuroscience/neurology and psychology/psychiatry which deal with the physical vs. mental aspects of an increasingly mysterious brain!

Sunday, May 22, 2011

In the previous blog, I discussed the brain areas that process motivation, and mentioned that they operate by assessing risk, reward, prediction and outcome. So how does that work?

First, neurons in the striatum (caudate, nucleus accumbens - NA, ventral tegmental area - VTA) fire at different rates when a subject performs different tasks for different types of reward. A water/juice reward might invoke only a small response in a hungry subject, while generating a large response in a thirsty subject. Neurons in hippocampus encode information into, and recall from, memory regarding prior actions, memory of the task, and memory of the conditions leading to the current reward value. Amygdala, cingulate and orbitofrontal cortex are involved in prediction and matching that prediction to the real outcome. Neurons in VTA and substantia nigra (SN) are activated by "surprise" when the outcome does *not* match prediction, and this all feeds back into hippocampus, Parietal Lobe and Temporal Lobe to become the next iteration of task memory.
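The "surprise" signal described above can be sketched with a minimal Rescorla-Wagner-style update (my choice of formalism here, not something from the original studies; the learning rate and reward values are arbitrary). The prediction error - actual reward minus predicted reward - plays the role of the dopamine-like surprise response:

```python
# Minimal prediction-error sketch (Rescorla-Wagner style; values arbitrary).
# "Surprise" = actual reward minus predicted reward; learning nudges the
# prediction toward the outcome, so surprise fades for well-predicted rewards.

def update(prediction, reward, lr=0.3):
    error = reward - prediction            # the mismatch / surprise signal
    return prediction + lr * error, error

pred = 0.0
for trial in range(20):                    # repeated pairings of cue and reward
    pred, err = update(pred, reward=1.0)

# After many trials the prediction approaches the reward and the error
# shrinks toward zero - as VTA/SN responses do for fully expected rewards.
print(round(pred, 3), round(err, 3))
```

An unexpected omission of reward produces a *negative* error by the same arithmetic, which is one way the circuit can signal "worse than expected."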

Next we will examine how reward *value* gets into the game, but it involves some of the psychological measurements that are commonly performed in lab test animals. However, modern noninvasive imaging techniques have shown that similar processes exist in humans, even though we do not and can not test in the same way.

But first, a slight side track on motivation...

Commonly abused drugs such as cocaine short-cut the whole process of reward assessment. Many of the neurons in the reward circuit either produce dopamine (VTA, NA, SN) or have receptors for dopamine inputs (amygdala, hippocampus, neocortex). A simple explanation of the action of many *stimulant* drugs (cocaine, methamphetamine, etc.) is that they directly affect dopamine neurotransmission by *prolonging* it - by blocking metabolism or reuptake back into the neurons to break it down. We now know that there is more to the mechanism, but suffice it to say that by acting directly on the neurons, instead of through the normal sensory inputs, drugs "take over" the reward pathway and produce the maximum *value* reward compared to any other outcome.

"Value" of a reward is relative, of course. Water has low value to a hungry subject, but high value to a thirsty subject. Fear of shock, pain or anxiety can have a higher value than thirst, until the subject gets thirsty enough. In psychology studies, researchers often look for whether the *work* or effort a subject puts forth is greater or less than the reward. A mouse may run a maze for cheese, but not water. A lab rat may press a lever 20 times for water, but not 21 times; may press only 10 times for food, but over 100 times for cocaine. Likewise a rat may risk stepping onto a grid that produces a mild shock to the feet in order to escape a brightly lit chamber and retreat to a dark, covered corner.

An interesting task given to human subjects is called the "Gambling Task" (or "Iowa Gambling Task," since it was developed at the University of Iowa). Subjects can draw a card from any one of four decks. From any deck, they may draw a win or loss - two decks have small wins, but also small losses - the other decks have large wins, but even larger losses. The strategy that a subject uses to choose cards - slow and steady wins, big win / big loss, or a mix - tells a lot about how that person assesses risk and reward. It is telling that persons who are big risk-takers - as well as drug abusers - tend to go for the chance of a big win, even though the big loss is even more likely. They also tend to make choices that are impulsive, rather than informed by history and calculation of the likelihood of reward.
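The payoff structure can be sketched in a few lines. The numbers below are invented for illustration (the real task uses fixed payoff schedules, and deck labels A-D are conventional): the "bad" decks pay big but lose bigger, so the steady strategy wins over many trials.

```python
import random

# Toy sketch of the Iowa Gambling Task payoff logic (payoffs invented;
# the real task uses fixed, pre-arranged schedules).
random.seed(1)

def draw(deck):
    if deck in ("A", "B"):                            # big wins, bigger losses
        return 100 if random.random() < 0.5 else -250
    else:                                             # small wins, small losses
        return 50 if random.random() < 0.5 else -25

def play(strategy, trials=100):
    return sum(draw(strategy()) for _ in range(trials))

steady = lambda: random.choice("CD")   # slow-and-steady strategy
risky  = lambda: random.choice("AB")   # chase the big wins

print("steady:", play(steady))
print("risky: ", play(risky))
```

With these (invented) odds, the risky decks have a negative expected value per draw, so chasing the big win loses in the long run - which is exactly the pattern impulsive subjects fail to learn from.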

So motivation is based on many things - the sensory information that yields information about reward value, prediction/expectation of outcome and the historical confirmation or denial of that expectation, the risk-reward assessment, and direct manipulation of the neurons that serve the motivational circuit. I don't want to downplay the role of informed decision-making and executive function, but that is material for the next few blogs on identity, personality, and thought.

So until next time, take care of your brain, and don't let it get "psyched out."

Friday, May 20, 2011

I will tackle the subject of motivation in two parts: Today's blog is about the brain structures which are active to provide information which is utilized as motivation. The second blog will discuss the psychology of motivation as it affects behavior.

The primary motivation for any behavior is "what do I gain?" No big surprise, but there is a series of brain areas that form a circuit to assess risk, reward, gain, and the value of such gain. We briefly touched on this in yesterday's discussion of amygdala and cingulate cortex. However, the "brain reward circuit" as we will discuss today is much more complex. The diagram above right by Prof. George Koob shows the circuit as a network diagram. Another view is superimposed on the brain below left.

Right in the center of the diagram is the nucleus accumbens, the part of the striatum (caudate/putamen) that surrounds the thalamus at the base of the brain. Amygdala is a major component, as is hippocampus. BNST refers to the "Bed Nuclei of the Stria Terminalis." The stria terminalis connects the amygdala to thalamic and basal ganglia structures, and the "bed nuclei" simply refers to the nuclei of origin for the structure. BNST, nucleus accumbens and ventral tegmental area (VTA) are part of the basal ganglia as described in a previous blog.
We can see that the circuit includes hippocampus, cingulate cortex and prefrontal cortex, implying a role for memory and executive decision-making. Orbitofrontal cortex is the particular Frontal Lobe area that matches decision-making with expectation.

The reward circuit thus serves two main functions - prediction of expected outcomes or "Reward", and the relative value of that reward in terms of the behavior, or "Risk." In this manner, the "reward" circuit is more appropriately termed a "Risk-Reward" circuit that predicts the payoff for a particular action in light of what kind or how much effort is required to gain that reward. Whether the effort is "worth it" is a matter of motivation.

In animal experimentation, it is common to offer the animal one of two "payoffs" for behavior - press lever A and get fruit juice, press lever B and get a fruit-flavored food pellet. The risk is low, reward value typically depends on whether the animal is hungry or thirsty. "Risk" comes in if the animal has to perform an action that might involve pain (crossing over a high narrow platform with no walls, or a floor that shocks the feet), while reward value can be conveyed by sensory information (replace the food pellet with cocaine injection, and a very thirsty rat may still choose drug).

Nucleus accumbens is involved in prediction, VTA in evaluating probability and surprise events, hippocampus in memory of previous choices, amygdala in balancing the emotional content and past vs. future predictions, cingulate in deciding between alternatives, and the prefrontal and orbitofrontal cortex provide the decision as well as updating memory and comparing the prediction to the actual outcome. The complete circuit is involved in providing the *motivation* for actions in which a risk-reward assessment can be made.

Such risk-reward assessments also include the judgment of whether the reward is worth the effort to be expended. In the case of the rat at right, navigating the maze just was too much work to justify the reward gained by the cheese at the end, and an alternate pathway was found.

But all of that borders on the psychology of motivation, so we will table that until the next blog!

Thursday, May 19, 2011

Just came across something that just *begged* me to discuss it in this column. It involves amygdala and motivation, so it fits in nicely with the current topic list. I promised myself when I started this blog not to get political, but this is a case in which scientific findings have been highly politicized and overinterpreted by the Press. I will present the case, the *common* interpretation that shows up in the news articles if you do a Google search on the topic, some additional scientific facts, and then how you could twist the results to support the opposite political position.

I present this not to be political, but to point out the dangers in politicizing the interpretation. Do I actually believe *either* interpretation? Actually, no, I take a more centrist view and I claim that we need to consider the results in the much broader view of total information from the field of Neuroscience.

One last warning: This blog is full of links to scientific papers. I chose those rather than the news articles to *reduce* the political spin. My apologies, but it is possible that some subscriber links will not work, but I tried to stick to public searchable papers.

A controversial study about liberal vs. conservatives and cognitive ability is a brain scan study that showed *greater* activity in some of the “cognitive processing” parts of the brain for self-identified liberal vs. conservative students (http://www.nature.com/neuro/journal/v10/n10/abs/nn1979.html).

Now there is a new study (http://www.ncbi.nlm.nih.gov/pubmed/21474316) that demonstrates that the cingulate cortex is larger in self-identified liberals, and the amygdala is larger in conservatives. The cingulate cortex is an area of the brain that deals with mismatch and conflict. When a person is presented with conflicting information and must make a decision (i.e. the “moral dilemmas” that I have talked about in this forum before), the cingulate is highly active. The amygdala is notorious for being involved in fear reaction.

Naturally, the "spin" being applied is that liberals are able to better perceive complex, conflicting information, while conservative brains are dominated by fear.

Well, there are two problems with that interpretation. First is that the amygdala is not the “fear” engine that those "spinners" would have us believe. Amygdala processes information regarding fear, true. It also processes other emotional cues.

Let's turn that spin around, because we can now put forth a *new* interpretation of the results. Both liberals and conservatives have different brain areas active when processing inputs. In liberals, the cingulate is more active because it is constantly having to resolve conflicts between expectancy and actuality. In conservatives, the amygdala is more active because they predict outcome based on experience and memory, and not mere reward seeking.

Sure, it’s all a matter of spin, right?

Yeah, not so much. This blog is about Science and "Getting it right!" so let's tackle the *second* problem I mentioned. Interpreters of the 2007 brain-scan activity study failed to take into account a basic foundation of neuroscience: neurons that have to work harder develop more, stronger connections and thus show more activity in a brain scan. The basis for this is a theory of plasticity, the Hebb Rule, and while we now know that it is not exact, it is a good working model for brain development, cognitive function and memory formation.
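In its simplest textbook form, the Hebb Rule says a connection strengthens in proportion to the product of pre- and postsynaptic activity - "neurons that fire together wire together." A minimal sketch (activity values and learning rate are arbitrary illustrations, not biological quantities):

```python
# Minimal sketch of the Hebb Rule: weight change is proportional to the
# product of presynaptic and postsynaptic activity (values arbitrary).

def hebb_update(w, pre, post, lr=0.1):
    return w + lr * pre * post

w = 0.0
for _ in range(10):                 # correlated firing strengthens the synapse
    w = hebb_update(w, pre=1.0, post=1.0)

w_silent = 0.0
for _ in range(10):                 # postsynaptic cell silent: no change
    w_silent = hebb_update(w_silent, pre=1.0, post=0.0)

print(round(w, 2), round(w_silent, 2))
```

This is exactly why "works harder shows up bigger": repeatedly co-active circuits accumulate stronger connections, which later registers as more activity (or more tissue) in a scan.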

Thus a different interpretation of the 2007 study is "liberal brains work harder to process the same information than conservative brains." This could mean anything from more effort, to more experience, to more conflicts between what they expect and what actually occurs. This actually ties in with other studies which show *abnormal* behavior associated with cingulate activity such as obsessive compulsive disorder, the fact that cingulate neurons are most activated when test conditions do *not* reflect reality (http://www.sciencemag.org/content/296/5573/1709.full), and the expectation of pain (http://www.ncbi.nlm.nih.gov/pubmed/9721952?dopt=Abstract).

So what's *really* happening here? Are conservatives vs. liberals impaired, enhanced, do they use more or less of their brains?

No. They are different. They approach problems from a different perspective. Remember, amygdala and cingulate perform very similar functions: prediction of outcomes. The *purpose* behind such a prediction has to do with planning behaviors. The brain has a number of "pre-programmed" networks to accomplish most any task that may be required. Faster reaction time and minimum "work" are provided by predicting the most likely outcome, but the brain needs to compare the prediction with the actual outcome in order to better predict in the future. The difference in roles of amygdala and cingulate is what information is involved in the prediction/comparison of outcomes. If we use a simple semantic definition of "Conservative = a person most likely to choose an action based on preserving historical trends" vs. "Liberal = a person most likely to be willing to choose an action that may involve risk and definitely involves change", then by all means, they will each use a different area of the brain: Conservative will use the area tied closest to memory - i.e. the amygdala, while Liberal will use the area associated with risk-reward and changing behavior.

A different approach, yes. A different intelligence? No. Is one *better* than the other? Not necessarily - just different.

So why did I get political? Because the truth is that this is an example of the dangers of politicizing, overinterpreting or even misinterpreting science. And finally, because The Lab Rats' Guide to the Brain is all about *Getting it right*!

Wednesday, May 18, 2011

Yes, I know I'm late - very late. I will try to make up time by getting the next several blogs up one each day.

This is the putative May 16th entry on Emotion. It is the first entry in the "Everything Else" section of the "Function" chapter of The Lab Rats' Guide to the Brain. The next two entries will look at "Motivation" from two perspectives: the brain areas which actually provide function with respect to motivation, and the psychological aspects of motivation as it relates to memory and behavior.

In the upcoming sections we will touch a lot more on Psychology as a subset of Neuroscience, rather than the Anatomy and Physiology that has comprised many of the previous blogs. One reason is because many of the topics: Emotion, Motivation, Personality, Consciousness, Intelligence, and Thought have a central, unifying theme: associative memory.

I have mentioned before, and will do so many times in this section that "association" is the unique property of memory that allows us to recall specific information *without* requiring computer-like addressing. Memory is much less like a dictionary - a compendium of isolated facts - and much more like a thesaurus, in which looking up a single word yields many associated words with similar meaning. Thus a writer looking for just the right term is able to "associate" from one word through a chain that results in finding that one right word. Likewise, memory is a chain of associations - a scent invokes a memory of perfume, which recalls a former girlfriend, associated with a favorite song, heard on the radio of your old car, which was totaled when the brakes failed, which reminds you that the safety inspection is due on your current vehicle!
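The perfume-to-safety-inspection chain above can be sketched as a lookup that follows associations instead of addresses. The links in this toy dictionary are invented to mirror the example in the text; real associative memory is massively parallel and far messier than a linear chain:

```python
# Toy sketch of chained associative recall (links invented to mirror the
# example in the text; real memory is associative in parallel, not linear).

associations = {
    "scent": "perfume",
    "perfume": "girlfriend",
    "girlfriend": "song",
    "song": "old car",
    "old car": "failed brakes",
    "failed brakes": "safety inspection due",
}

def recall_chain(cue):
    """Follow associations from a cue until no further link exists."""
    chain = [cue]
    while chain[-1] in associations:
        chain.append(associations[chain[-1]])
    return chain

print(" -> ".join(recall_chain("scent")))
```

The point of the sketch: nothing in the chain required knowing "where" the safety inspection memory was stored - the cue content itself retrieved it, which is what distinguishes associative recall from computer-style addressing.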
Likewise, emotion is not just one process, attributed to just one location in brain. Early studies revealed that damage to the amygdala, limbic system, frontal lobe or parietal lobe produced a change in personality - and most particularly emotion. While it is true that limbic/prefrontal damage tended to produce severe alteration in emotion "affect" (the emotional content present in facial expression, voice, language and physical interpretation), none of these is truly the *source* of emotion, but merely one of the processing areas.

True emotional processing seems to be in the Parietal Lobe, and as such is related to other sensory association areas of cortex. This leads to the supposition that emotion is in fact a product of associative memory combined with sensory input. In this manner, the primary "sensory" structure for emotion *is* the limbic system, in particular the amygdala. Amygdala is arguably a component of the hippocampal system for processing of memory; however, what it processes is emotional *content*. It is highly activated by fear and stress; inactivation of or damage to the amygdala results in either an uncontrolled fear response or the absence of a normal one. Likewise, damage to other regions of the limbic system (i.e. Medial Septum, the dark spot in the middle above amygdala) can result in uncontrolled anger (or total passivity). However, in these cases it appears that the culprit is damage to neurons which secrete neurochemicals that affect the rest of the limbic system and parietal lobe.

Other subjective emotions - like/dislike, love/hate - are largely processed by the Parietal and Frontal Lobes, and fall back into the category of Executive Function as described in the March 14th blog. The "ability to feel" emotion is also an Executive Function, and yes, it is tightly tied to memory and association. The emotional deficits resulting from head injury and frontal lobotomy (technically "prefrontal lobotomy," since it disconnects the prefrontal cortex from the rest of the Frontal Lobe) result from damage to the brain areas that serve the "conscious" functions of the brain.

But more on that later, when we discuss personality as part of the "Motivation," "Thought" and "Intelligence" sections of The Lab Rats' Guide to the Brain!

I *do* have new content and am two days behind getting it posted, due in part to momentum lost last week and to having to deal with many layers of government bureaucracy in my day job that have sucked away all of my work and off-work time.

I promise to get the emotion and Motivation blogs completed and posted as soon as possible. Stay tuned tonight for new content.

Sunday, May 15, 2011

The previous blog concentrated on direct nervous control of the body. This blog will concentrate on *indirect* control via chemicals released by the brain to affect targets in the rest of the body.

Most people should be familiar with the common hormones: testosterone, estrogen and progesterone, produced by the testes and ovaries; adrenalin and cortisol, produced by the adrenal glands; thyroxin, produced by the thyroid gland; and growth hormone, secreted by the pituitary gland. With the exception of growth hormone, each of these hormones is produced outside the brain by specialized tissues that receive extensive blood and neural inputs. In addition to the obvious effects (e.g. reproductive effects and secondary sex characteristics), these hormones have profound effects on muscle development, blood pressure, metabolism, fat deposition, and bone growth.

What is less obvious is that the production of each of these hormones is under strict control of the brain - either through direct neural connections (e.g. to the adrenal gland) or indirectly through specialized "tropins," regulatory hormones produced by tissues in the brain. The hypothalamus is as much a "gland," by definition, as it is a nucleus of the brain. Neurons in the hypothalamus control the autonomic (i.e. automatic or involuntary) functions of the nervous system and body, regulate homeostasis - body temperature, blood pressure, digestion, heart rate, respiration rate, etc. - and give instructions to the pituitary gland (distinctly a *gland* and not a nucleus) through specific releasing hormones and inhibiting hormones. In this manner, the hypothalamus also controls, directly or indirectly, the functions of all the other glands of the body, forming what is officially termed the "endocrine" (hormonal) system.

The hypothalamus produces antidiuretic hormone (ADH), or vasopressin, which regulates smooth muscle contraction around the blood vessels, and hence the blood pressure throughout the body. This is a very important function, since blood must be under pressure to penetrate the small capillaries that serve all tissues of the body. Another important, similar, hormone produced in the hypothalamus is oxytocin, which produces contraction in the smooth muscle of the mammaries and uterus, promoting lactation and inducing labor during pregnancy. Both hormones are produced by the hypothalamus, but stored and released by the pituitary. This is accomplished by neurons with cell bodies in the hypothalamus and axon terminals which store and release the hormones onto blood vessels in the posterior pituitary, in much the same manner as neurotransmitter release onto other neurons.

A series of "stimulating" or tropic hormones is produced and released by the anterior pituitary, under the control of releasing hormones delivered from the hypothalamus. These include prolactin (stimulates the mammary glands), human growth hormone (affects bone and muscle growth), melanocyte stimulating hormone (affects skin color, and has some interesting interactions with the enkephalin/pain system), thyroid stimulating hormone (causes the thyroid to produce thyroxin), adrenocorticotropic hormone (causes the adrenal cortex to produce cortisol, one of the body's primary stress-response hormones), follicle stimulating hormone (causes the development of follicles and ova in the ovaries, and also promotes estrogen synthesis), and luteinizing hormone (causes ovulation - the release of ova from follicles - and transforms the follicle into a "corpus luteum" which produces progesterone).
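To keep the cast of characters straight, here is a minimal sketch (Python, with informal labels of my own choosing, not official endocrinology terms) of the anterior pituitary hormones just listed, each mapped to its target tissue and primary effect:

```python
# Illustrative lookup table only - a compact restatement of the hormones
# described in the paragraph above, not an exhaustive endocrine reference.
anterior_pituitary = {
    "prolactin": ("mammary glands", "stimulates milk production"),
    "growth hormone": ("bone and muscle", "promotes growth"),
    "melanocyte stimulating hormone": ("skin", "affects pigmentation"),
    "thyroid stimulating hormone": ("thyroid", "triggers thyroxin production"),
    "adrenocorticotropic hormone": ("adrenal cortex", "triggers cortisol release"),
    "follicle stimulating hormone": ("ovaries", "follicle and ova development; estrogen synthesis"),
    "luteinizing hormone": ("ovaries", "triggers ovulation; forms the corpus luteum"),
}

def describe(hormone):
    """Return a one-line summary for a hormone in the table."""
    target, effect = anterior_pituitary[hormone]
    return f"{hormone}: acts on {target}; {effect}"

print(describe("adrenocorticotropic hormone"))
```

Each entry follows the same pattern: the hypothalamus signals the pituitary, the pituitary releases the tropic hormone, and the hormone acts on its downstream gland or tissue.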

The pineal gland is the other major gland of the brain; it produces melatonin and releases it at night to help regulate the sleep/wake cycle.

There are also lesser-known neurotransmitters and neuromodulators produced in various areas of the brain which have hormone-like functions both within the brain and throughout the body. For example, orexin is a neurotransmitter produced in the hypothalamus and a few other brain areas that regulates neural activity, promotes wakefulness, and is involved in feeding behavior. Ghrelin is a neuromodulator and hormone (a growth hormone releaser) in the hypothalamus that is also produced by the stomach; it is associated with appetite, rising before meals and falling after eating. Cholecystokinin (CCK) is a minor neurotransmitter that is also produced in the intestines and regulates smooth muscle contraction, fat uptake and gall bladder function. Norepinephrine is a major neurotransmitter that is structurally similar to adrenaline (epinephrine) and is involved in the stress response.

Together, the thalamus and hypothalamus are intricately involved in control of the body - the thalamus (with the midbrain and spinal cord) in direct neural control, and the hypothalamus in endocrine, or hormonal, indirect control. Many systems of the body will certainly continue to function without inputs from the rest of the brain, but the brain retains an override or regulatory pathway for every one of these systems.

Thus the brain really *is* in control of the whole body. So, don't let the *body* tell you what to do. *You* are in control!

Saturday, May 14, 2011

[This is the May 12 post delayed by Blogspot maintenance issues – sorry for the delay]

Many of the entries in The Lab Rats Guide to the Brain deal with structure of the brain, and contain only limited information about interaction with the rest of the body. Sure, there's mention of motor cortex and sensory inputs, but nothing really about how the body is *controlled.* In fact, there is much more information about the inputs to the brain (senses) than the outputs.

So this post and the one to follow are specifically about how the brain outputs commands and instructions to the body. Today's blog is about direct neural control – that is, neurons that project down to the spinal cord, out to various muscles and organs, and cause an action. Sunday's blog (May 15) will be about indirect control via hormones and neurotransmitter modulators.

The connections of brain to muscle should be reasonably obvious by now. First we have the "Motor Homunculus" (right), which shows how the motor cortex is organized with respect to outputs to the major muscle groups. In addition to the obvious muscle connections (which we will get back to in a minute), there are connections to the arteries, to regulate the diameter (and hence pressure) of the blood vessels. There are connections to the skin, to change its thickness, control sweat, control temperature and cause "piloerection" (goosebumps) - a remnant of the reaction that raises hairs to increase the insulating properties of fur, which humans no longer have! Less obvious are the connections to the organs, which speed up or slow down activity, cause release of chemicals (hormones) and prepare the body for different conditions.

To list off the types of outputs from the brain, we have (1) visceral control, (2) neuroregulatory control, (3) hormonal release control, (4) involuntary motor control, and (5) voluntary motor control. By visceral control (1), I mean direct control of the primary organs - for example, the heart will beat even without direct input from the brain; however, that input will cause the heart to slow down or speed up its rate. Likewise, the muscles that line blood vessels and the stomach & intestines (what we call "smooth muscle") have local circuits that control them, but inputs from the brain will change the speed and amount of contraction of the muscle, thus changing pressures and rate of flow (of blood or intestinal contents). Neuroregulatory control (2) is the control of the neural signals to and from the body. We all know of the ability to control or ignore certain sensory signals: pain, hunger, full bladder, etc. This ability is actually due to neural commands from the brain to the nuclei that relay signals to the brain. A certain amount of control of inputs (just like the control of the iris or eardrum to limit visual or auditory range) is built into the sensory system to allow the brain to override signals when necessary. Hormonal release control (3) relates to the various *glands* throughout the brain and body, most notably the pineal, pituitary, adrenal, sweat and salivary glands and the genitalia. These glands all release (or change) their chemical contents when stimulated by neural commands. Involuntary motor control (4) comprises all of the reflexes and automatic functions controlled by the brain that a person does not have to think about. The heart beats, the diaphragm and ribcage create the breathing cycle, the stomach and intestines squeeze and relax to promote digestion, and muscles in the pelvic region tighten to limit the flow of fluids and solids when not needed. Virtually all of the neurons that control the above four functions originate in the brainstem (e.g. the medulla) or lower (spinal cord, or ganglia near the appropriate organs).

Finally, voluntary motor control (5) consists of all of the muscles that we can move just by thinking about them: Arms, legs, tongue, fingers, with additional voluntary overrides of involuntary functions (eyeblink, deep breath, bladder retention) etc. These functions (and neurons) are the ones that arise from the motor cortex.

To further understand how all of this works together, there is one further key piece of information: Muscles can only pull. They cannot push.

So how does limb movement work? Clearly there must be some sort of push-pull system going on, right?

Well, there is an oppositional system, but it is more appropriately a "pull-pull" system. Consider Da Vinci's famous "Vitruvian Man" drawing (left). This illustrates the best coordinate system for talking about anatomical positioning. In the drawing, all limbs are extended, the joints are straight, and the opposing muscle groups are at equal lengths. To bend an arm from this position, the elbow must be bent, changing the angle between upper and lower arms to something less than 180 degrees. This is accomplished by muscles contracting on the *inside* of the arm (inside the elbow joint). To re-straighten the arm, muscles on the outside of the arm contract, pulling the elbow and bones back into alignment.

We call the first action "flexion" - i.e. any muscle that reduces a joint angle to less than 180 degrees is a "flexor." The straightening action is "extension," and the muscles are termed "extensors." To complete the process, there is a reflex "wired" through the spinal cord to relax the extensors when the flexors contract, and vice versa. For circular or circumferential muscles, such as the iris of the eye or the sphincters (urethra and anus), the arrangement is a bit more complicated - there are circular muscles that surround the opening, and radial muscles at right angles to them. As shown in the illustration at the right, contracting the circular muscle closes the opening, while contracting the radial muscle opens it. Coordination of the opposing muscle groups is regulated by the cerebellum and brainstem/spinal cord - so even for *voluntary* muscle movement, there is a certain measure of involuntary control.
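The "pull-pull" arrangement with its reciprocal-inhibition reflex can be caricatured in a few lines of code. This is a toy sketch, not a physiological model - the function name and the winner-take-all rule are my own simplifications of the spinal reflex described above:

```python
# Toy model of an antagonistic flexor/extensor pair: muscles can only pull,
# and the spinal reflex relaxes one muscle when its antagonist contracts.
def muscle_pair(flexor_command, extensor_command):
    """Return (flexor_activation, extensor_activation), each in [0, 1].

    The stronger descending command "wins," and reciprocal inhibition
    silences the antagonist.
    """
    if flexor_command >= extensor_command:
        return flexor_command, 0.0   # flexors pull; extensors relax
    return 0.0, extensor_command     # extensors pull; flexors relax

# Bending the elbow: the flexor command dominates, the extensor goes slack.
print(muscle_pair(0.8, 0.1))  # (0.8, 0.0)
# Re-straightening the arm: the extensor command dominates.
print(muscle_pair(0.0, 0.6))  # (0.0, 0.6)
```

Note that neither activation value is ever negative - the model never "pushes," only pulls or relaxes, which is the whole point of the oppositional arrangement.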

Even where the brain has *delegated* certain aspects of control to local neural circuits, there is still a provision for descending control directly from the brain. Sexual arousal is a perfect example, but so are anxiety, the "fight or flight" reflex and meditation. As we will see in tomorrow's concluding blog, ultimately the brain is in control of all processes of the body!

Sorry about that, but it means the May 12th post was absent and May 14th may be delayed. I will try to get those up this weekend and resume The Lab Rat's Guide to the Brain regular posting schedule next week.

Tuesday, May 10, 2011

[Today's regularly scheduled post was released a day early to tie-in with online discussions regarding 3-D and other visual effects used in filmmaking. See the "Too Much Stimulation" blog from May 9th for details. Therefore, today's post is largely administrative, and to provide a schedule for upcoming topics and events over the next two months.]

Due to numerous research grant application deadlines over the next few months, not to mention an extensive travel schedule for June, I have decided to map out the next two months in blogs.

The current chapter of The Lab Rats Guide to the Brain is focused on function. The outline of this section (with the dates of the respective blogs) is as follows:

Right around the first of June will be when the first of the grant application deadlines hits, so Memory and Dreams will be recaps of material covered when discussing the Temporal Lobe and Hippocampus (Memory) and a recap of the "Dreams" posts from January/February with which I rebooted the blog.

I leave June 14th to start a 15-day trip that will take me from home to Los Angeles, CA, then off to Aberdeen, Scotland (my *second* home, so to speak) for a grant and scientific writing conference. On the return I will spend several days visiting London, Bath and Reading, England to see old friends, colleagues and (hopefully) future students. I return to the U.S. on June 28.

For the period that I will be out of town, I will serialize a light-hearted short story that I wrote involving a rather clueless academic who is challenged to write something that he just doesn't feel is very scientific. I do not feel that I can sustain the momentum to finish The Guide with a two-week interruption, and thought I would present you with some original fiction from a scientist's perspective.

So, the serialized original short fiction "Blood Science" will appear in the blog from June 16-28.

I may have some travelogue-style posts from the trip, to appear in the blog either interspersed with the serial or on my return, June 29-30.

I am planning a wrap-up/recap of the Guide chapter on Brain Function - July 2nd.

July 4th will be special content, then the penultimate chapter of The Lab Rats Guide to the Brain: Diseases and Disorders will start on or after July 4th.

Thanks for reading, and stay tuned for more Lab Rats, more Brain, and more of the Guide!

Starting of the final chapter - Brain Diseases and Disorders - Coming July 2011

Monday, May 9, 2011

[I originally scheduled this blog for May 10th, but online discussion has prompted me to post early to tie in with the discussions referenced below. The regularly scheduled May 10th blog will therefore be a schedule update for the next two months.]

This blog is prompted by a recent discussion of 3D movies over at Howard Tayler's *excellent* webcomic Schlock Mercenary. The generic link for the webcomic is in the "Friends and Stuff I Really Like..." box in the lower left of this page. The link to the specific discussion is here: http://www.schlockmercenary.com/blog/thor-movie-review#disqus_thread

Howard Tayler posts reviews of movies he has viewed, and a common thread in the movies deals with the visual effects of 3-D and "shaky cam." The problems with these film-making techniques are numerous - yet, they can all be accounted for by examining how the visual system processes and interprets the information.

Referring to the diagram at right, we see the primary visual pathway, where visual information transfers from the retina to the Thalamus (lateral geniculate nucleus) to the primary visual cortex in the Occipital Lobe (by the way - three cheers to Howard Tayler - he is a writer who "gets it right!"). As an object moves across our field of view, clearly it will activate different regions of visual cortex, and hence the motion will be perceived in our visual processing. In fact, there is a controversial hypothesis that states that output from the visual cortical regions follows two pathways - one to the temporal lobe to be combined with memory and identify "what," and the other to parietal lobe to be combined with sensory and motion information and identify "where."

However, this "well-known" aspect of visual processing ignores an important lower pathway by which visual information is processed. See the yellow "buttons" in the diagram labeled "Superior Colliculus"? And note that there is no indication of visual information projecting to the Cerebellum. Well, this often-neglected pathway directly processes visual motion, and is necessary to keep the eyes pointed at moving objects - in other words, motion tracking. Neurons in the lateral geniculate also project to the Superior Colliculus (SC) and Cerebellum. Neurons in the Cerebellum in turn project back to the SC and to the Frontal Eye Fields in the Frontal Lobe. The Frontal Eye Fields then project to the Oculomotor Nucleus, which controls the Oculomotor Nerve for control of eye position, and to the Locus Coeruleus ("blue nucleus") and Edinger-Westphal Nucleus, which affect the pupil and lens of the eye to regulate focus and the amount of light reaching the retina. The Superior Colliculus and Oculomotor Nucleus sit in the region of the mesencephalon, or midbrain, that includes the "optic tectum."
So, what does this have to do with 3-D and "shaky cam?" Well, let's look at what else connects with the same midbrain areas:

The diagram at left shows the human brainstem, and immediately below the SC we see the Inferior Colliculus (IC). The IC serves a function similar to that of the SC, but for the auditory system. The IC and several smaller nuclei referred to as the Superior and Inferior Olive - the Olivary Complex - participate in tracking moving sounds. In particular, the Olivary Complex compares the timing of sounds arriving at the two ears to localize the source of the sound from right to left. The IC communicates with the vestibular system (the "Semicircular Canals" - the balance system of the inner ear) and with the Cerebellum (as does the Olivary Complex) to coordinate the motor commands that move the eyes to complete the tracking of a moving sound. But just as important are the nuclei located in the same part of the midbrain - the Raphe Nuclei. Neurons in these nuclei utilize serotonin as a neurotransmitter. The nuclei have various functions in attention, circadian rhythm, motor coordination, overall brain activation, depression, pain sensation and hallucination. The dorsal Nucleus Linearis and the Dorsal/Medial Raphe are sensitive to hallucinogens such as LSD. The medial nuclei in the Pons are connected to the cerebellum for motor learning and coordination, while the ventral nuclei in the medulla regulate pain sensation, fever, respiration, heart rate and contractions of the stomach and intestines (related to nausea). There are interconnections among all of these areas - through the optic tectum, Cerebellum and Spinal Cord, as well as direct interconnections.
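The timing comparison performed by the Olivary Complex can be illustrated with some back-of-envelope arithmetic. This sketch assumes a ~0.2 m ear separation and a 343 m/s speed of sound, and uses the standard far-field simplification ITD = d·sin(angle)/c; none of these numbers come from the post itself:

```python
import math

# Interaural time difference (ITD): how much earlier a distant sound
# arrives at the near ear than the far ear, as a function of the angle
# of the source from straight ahead. (Assumed, illustrative constants.)
EAR_SEPARATION_M = 0.2
SPEED_OF_SOUND_M_S = 343.0

def itd_seconds(azimuth_degrees):
    """Far-field approximation of the time-of-arrival difference."""
    return (EAR_SEPARATION_M
            * math.sin(math.radians(azimuth_degrees))
            / SPEED_OF_SOUND_M_S)

# A source directly to one side (90 degrees) arrives roughly half a
# millisecond earlier at the near ear; straight ahead gives zero ITD.
print(f"{itd_seconds(90) * 1e6:.0f} microseconds at 90 degrees")
print(f"{itd_seconds(0) * 1e6:.0f} microseconds straight ahead")
```

The differences involved are fractions of a millisecond, which is why the brainstem circuitry doing this comparison has to be so exquisitely fast.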

Thus we see that the visual motion system is interconnected with systems that perceive balance and control the brain's awareness of its position with respect to moving sights and sounds. It is indirectly connected with brain areas that regulate bodily functions such as pain sensation, heart rate and nausea. Thus what a "good" 3-D visual experience does is to provide what viewers would consider a more *visceral* experience - *precisely* because it activates additional brain systems that fool the body into experiencing more sensation than just the visual.

And herein lies the problem.

The mismatch between what the eyes see, the ears hear, and the feedback from the rest of the body is a problem for the brain. "Shaky cam" gives the sensation of normal body movement - as a person moves through their environment, the point of view continually moves - shakes, turns, rotates back-and-forth - but the brain DOES NOT PERCEIVE THAT MOTION! Proprioceptive feedback from the body, Cerebellar interactions with the occulomotor system, and the decades of learned experience lead the brain to perceive a steady, continuous view of the environment - in fact, *memory* of previous views allow us to have a perception of our entire surroundings, even when we aren't actually looking at them. "Shaky cam" violates that perception. We don't have the feedback from muscle movements to allow the brain to correct for the motion, and it makes us feel uncomfortable.

Sometimes that's the point - "The Blair Witch Project" and "Cloverfield" used the "shaky cam" precisely to make the audience uncomfortable. However, that trick only works a few times; it is best used sparingly, and it has no place in a movie intended to entertain without a brooding, disturbing background. Any other use of "shaky cam" is just inappropriate, shoddy film-making. It leaves the audience with headaches and nausea, and - yes - feeling like they've just had a bad LSD trip. It is a gimmick, a trick used by producers and directors to be "trendy," and it just reflects poorly on their own skills.

The visual system does not actually require 3-D effects (and really *cheesy* ones at that!) to perceive the three-dimensional aspects of a scene. The visual association areas in the Parietal Lobe compare many aspects of a scene to provide knowledge of depth and perspective - for example, a small object moving slowly across the scene is perceived as being further away than a large object moving quickly. Eclipsing of an object, where one object is covered by another, is perceived as the disappearing object being *behind* the still-visible object. Objects which get larger are perceived as coming toward the viewer, while objects getting smaller are perceived as moving away.

After all, stereoscopic vision relies on two eyes placed 7-8 centimeters apart. Once an object is more than 100 times that distance away (call it 10 meters distant), there is *very little* stereoscopic difference. Add to that the problem of corrective eyeglasses - correction of nearsightedness requires eyeglass lenses that make an object smaller in the course of correcting the focus. When a person has two eyes requiring different correction, the brain has to resolve two different sized images projected onto retina. Three-dimensional interpretation of our environment essentially requires a *lot* of processing involving visual and association cortex, the processing nuclei and Cerebellum - but mostly the visual association areas.
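The stereoscopic claim above is easy to check with a little trigonometry. A minimal sketch, assuming a ~7 cm eye separation (the vergence-angle formula is a standard geometric approximation, not something from the post):

```python
import math

# Vergence angle: the angular difference between the two eyes' lines of
# sight when fixating an object at a given distance. It is the raw signal
# available for stereoscopic depth, and it shrinks rapidly with distance.
EYE_SEPARATION_M = 0.07  # assumed ~7 cm, per the text

def vergence_degrees(distance_m):
    """Angle (degrees) between the two eyes' lines of sight at fixation."""
    return math.degrees(2 * math.atan(EYE_SEPARATION_M / (2 * distance_m)))

for d in (0.5, 1, 10, 100):
    print(f"{d:>5} m -> {vergence_degrees(d):.3f} degrees")
```

Running this shows several degrees of vergence at arm's length but well under half a degree at 10 m - consistent with the point that beyond roughly 100 times the eye separation there is *very little* stereoscopic difference left to exploit.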

This is not to say that 3-D TV and movies don't have their place. There is an incredible feeling - again, a "visceral" one - of viewing a 3-D film. But in reality, that sensation is also gained by very high resolution, large-screen HDTVs and in IMAX theaters, so that the visual system can clearly see all the details that it uses to construct the three-dimensional perception. The feeling of immersion is assisted by 3-D audio effects as well. But over-use of 3-D - that is, over-stimulation by 3-D - results in headaches and nausea, precisely because it elicits that "gut-reaction" to the visual images. The aforementioned eyeglass wearers might *never* have a comfortable 3-D experience because of visual mismatch. Let us not forget that the mismatch also extends to the difference between the visual system and the body - with the visual and auditory system telling the brain that the body should be moving, at the same time that the body is telling the brain that it is *not* moving!

Use of 3-D - in the flying sequences of "How to Train Your Dragon" - yes, thumbs up! To "Toy Story," "Journey to the Center of the Earth," and "Thor" - no, thumbs down.

To directors and filmmakers: Stop using cheesy effects because you don't have the skills to make a better movie. *Learn* some brain science and GET IT RIGHT! And stop giving us headaches and making us sick to our stomachs!

Tune in to The Lab Rats' Guide to the Brain for more on how to "get it right" in brain science ... after all, it's not brain surgery...

Sunday, May 8, 2011

In the previous blog we discussed the "five" senses: sight, sound, touch, smell, taste - and then added some more sensory functions to the list. I argue that a *sixth* sense, "proprioception," should be added to the list, since it provides essential knowledge about body condition and position to the brain - much more than touch alone.

Then too, there are multiple subdivisions within the visual and auditory systems. We could easily argue that both auditory and visual information is further divided into simple (pitch or color), complex (harmony or shapes) and movement-related sensation. The supporting argument for such a division is in the different pathways each type of information takes into and through the brain. The counter argument is that auditory and visual information arise from just the one respective type of sensory neuron (cochlear hair cell or retinal photoreceptor) and the information gets combined in the association cortex for each respective "sense".

Then again, we list taste and smell as two separate senses, when by some considerations they are a single sense. Both types of sensory neurons are chemoreceptors, and both require that the chemical being sensed be dissolved in liquid. Just as an extremely dry mouth blunts the sense of taste, nasal dryness blunts the sense of smell. The interior of the nose warms and humidifies the air we breathe not only to protect the sensitive airway, but also because moisture is essential to detecting scents. Try this some time: sniff your favorite spice dry from the container; then place a very small amount onto paper and sniff again; then put that same small amount into boiling water and sniff the steam!

Still, taste and smell are justified as separate senses by their brain pathways - smell is conducted from the olfactory epithelium in the nasal passages via the Olfactory Nerve to the Olfactory Bulb, and then to the Pyriform Cortex and limbic structures. Taste is conducted from the sensory neurons in the taste buds directly to the Thalamus by the Facial Nerve and Glossopharyngeal Nerve. From the Thalamus, taste is represented in the Somatosensory Cortex in the Parietal Lobe. However, as discussed in the prior blog with respect to the visual and auditory systems, divergent neural pathways have historically been no impediment to the definition of a single sense.

Another curious fact about the senses of taste and smell comes from studies of persons who have lost their sense of smell - whether through chemical means, disease or trauma. Most folks know that sinus congestion impairs the senses of smell and taste. However, in the absence of a sense of smell, other neurons can take over. The Trigeminal Nerve, which connects to sensory neurons in the skin of the face, can sense strong chemical odors. If you have ever felt your cheeks burning from a strong spice (wasabi!) or peppers, you know that your trigeminal sensation is working well.

However, these chemical receptors pose a further problem for subdividing the sense of touch. The tactile sense is commonly considered to be the primary sense of the majority of the body. But what of vibration, hot, cold or pain? The sensory neurons in joints and muscles also respond to vibration, so vibration is commonly classed with proprioception rather than touch. The skin also contains many chemically sensitive neurons that react when the surrounding cells are damaged and release their internal contents into the spaces between cells. In addition to specialized receptors for heat and cold, these chemoreceptors are responsible for the continuing "burning" sensation that follows *after* too much heat or cold has damaged the skin. The majority of "pain" sensation is in reality the activation of chemoreceptors following damage to the cells of the body - and in some cases the neurons themselves are damaged and continue to send signals along pain pathways even though the source of the damage is long gone.

This leads to two last issues with tactile or body sensations:

The first is the mapping of sensation to regions of the skin. The technical term is "dermatomes": the areas of skin served by a common nerve or region of the spinal cord. Neurologists map dermatomes to localize nerve damage. Loss of sensation on a particular part of the body can be traced to a specific spinal nerve, region of the spinal cord, and ultimately to regions of the Thalamus and Somatosensory Cortex. Regions that require fine touch - fingertips, tongue, lips - have many very small dermatomes. Regions of the body that do not require very precise localization of touch - chest, abdomen, back - have a few large dermatomes. The "sensory homunculus" (right) emphasizes this difference by the *size* of the represented area - indicating that more cortex (and hence more, smaller dermatomes) is devoted to certain body regions.

One particular dermatome that I have problems with stretches almost from the hip to the knee along the outside of my right leg. This large area is served by one nerve - the Lateral Femoral Cutaneous Nerve - and because I damaged it many years ago, this entire region of my leg is plagued by numbness and occasional burning pain.

Which brings up the final point: phantom pain. Just because a limb is missing or a nerve is damaged does not mean that the brain simply ignores the connections where sensory information from that region should appear (see the "sensory homunculus," above). Damage to nerves is often felt as a severe burning pain. Even when the nerve is dead and gone, the neurons of the spinal cord, Thalamus and Somatosensory Cortex can still receive occasional random signals. When this occurs, the brain interprets the signal as activation of the respective sensory inputs - and in particular as pain - from the missing nerve or limb.

Even the internal organs, which lack dermatomes and specific topographic mapping to the somatosensory cortex, can be represented as pain on the surface of the body. Most people realize that both heart and stomach pain can be felt in the chest, kidney pain in the small of the back, etc. This is because the nerves handling pain sensation from those organs enter the spinal cord at the same point as the nerves for the corresponding dermatomes - leading to "referred pain" that is confusing to the patient, but well known to the neurologists and neuroscientists who study pain and sensation.

So the "five" senses aren't really five. There may be six if we count proprioception - or four if we combine taste and smell. However, if we count *all* of the subsets, I come up with 12! The brain is a complex creation, and we haven't even gotten to the "mythical sixth sense" of which so many writers are fond.

But we will! Rest assured, there is much more to come in The Lab Rats Guide to the Brain!

Over the past several months this blog has talked a lot about structure and function within the brain, with particular attention paid to how information gets into and out of the brain. The primary senses listed above are the major forms of input to the brain, but this list is somewhat deceptive in its simplicity.

We know from the multiple processing sites in the brain that vision and hearing are not uniform, monolithic inputs. We can subdivide the visual sense into color vs. black & white, lines vs. shapes, moving vs. stationary objects. We are justified in thinking of each as a separate visual "sense" because the output from the retina goes to several different brain nuclei, and those separate lines do not necessarily converge again. For example, visual information goes from the retinal ganglion cells to the lateral geniculate (in the Thalamus) to V1 visual cortex (Occipital Lobe) to support the detection of lines and shapes. On the other hand, visual information also goes from the retinal ganglion cells to the superior colliculus (in the midbrain) to the cerebellum and back to the Frontal Lobe for tracking of movement.

A similar subdivision can be made for the auditory sense - single tones vs. harmony, melody, rhythm and stereo localization. As with the visual sense, information flows from the cochlea to the medial geniculate (Thalamus) to auditory cortex for sound identification, but it also travels from the cochlea to the inferior colliculus (midbrain) to the cerebellum and back to the Frontal Lobe for integration with vision to track moving objects. There are also brainstem structures that receive sensory information. The Superior Olive compares the timing and intensity of sound arriving from the left and right ears to assist in determining the location of *unmoving* sound sources (the Inferior Olive, by contrast, serves mainly as a relay into the cerebellum).

And what about the sense of balance? Isn't that associated with the inner ear and the auditory system? Shouldn't it be a "sense" as well?

Well, No and Yes. While the semicircular canals of the inner ear share a structure with the cochlea, their functions are quite different. Many neuroscientists consider the sense of balance to be part of the sixth sensory classification called "proprioception." This is the sense of position that combines pressure on the soles of the feet, with angle information from the joints, stretch information from muscles and balance information to inform the brain about the position of the body. Anyone who has ever had to close their eyes and walk or touch their nose with an outstretched finger has used the proprioceptive sense.

Unlike the visual and auditory systems, in which one type of sensor (retinal photoreceptors or cochlear hair cells) has its output divided into many functions, the proprioceptive sense is a combination of many different sensory neuron types that are merged into one sense. The vestibular system of the inner ear senses head rotation via hair cells in the semicircular canals, and head position and linear movement via the "otolith" organs - both using a system very similar to the hair cells of the cochlea. Pressure sensors in the feet use the same neurons as the sense of touch elsewhere - neurons that become more active when the cells are pushed, compressed or stretched - in other words, if the cell changes shape, it sounds off! The stretch receptors in the muscles and tendons are specialized neurons, similar to muscle cells, that detect when a muscle is either actively or passively stretched.

Tactile sense comes from a vast number of these "mechanoreceptors" located in the skin, muscle and all of the tissues in between. So in another sense, the "proprioceptive sense" could be considered just a subdivision of tactile sensation. However, there is more to the tactile sense than just "touch," as we will learn in the *next* blog - "The *Six* Senses (and maybe more)"

So tune in next Sunday for Taste, Smell and hot & cold running senses!

Wednesday, May 4, 2011

As promised in the last blog, this time we're going to talk about a less obvious form of neural code - oscillation.

Believe it or not, most people have heard of this type of encoding - they just don't realize what it is. Some neurons in the brain seem to have no specific encoding function; rather, they fire at a continuous, steady rate. Like a pacemaker, these neurons vary their firing rates only under very highly constrained circumstances, so each forms a sort of clock or pacemaker rhythm as a background to other types of encoding.

Such a rhythm is not a single fixed frequency, nor does any one neuron provide the entirety of the frequency - that is, not every neuron necessarily fires on each "peak" of the wave. Instead, many different neurons fire "entrained" to the rhythm, and the frequency is thus a product of many neurons firing, going silent, then firing over and over again. Most of the brain's intrinsic rhythms are derived from small bundles of neurons that "oscillate" at the basic frequency. For example, the first electroencephalographic (EEG) recording by Hans Berger in 1924 revealed a basic rhythm that he called "Alpha wave" (Top of figure at right). This rhythm was eventually found to originate in thalamic neurons.
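The idea that a rhythm emerges from many loosely entrained neurons - none of which fires on every peak - can be sketched in a toy simulation. The neuron count, firing probabilities and frequencies below are invented for illustration, not taken from real recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                          # samples per second
t = np.arange(0, 2.0, 1 / fs)     # 2 seconds of simulated "recording"
alpha_hz = 10                      # shared pacemaker frequency (alpha band)

# 100 model neurons: each fires with a probability that rises and falls
# with the shared 10 Hz rhythm, but no single neuron fires on every peak.
drive = 0.5 * (1 + np.sin(2 * np.pi * alpha_hz * t))    # 0..1 per sample
spikes = rng.random((100, t.size)) < 0.02 * drive       # sparse, entrained firing

# Summing across neurons yields a population signal that waxes and wanes
# at 10 Hz - roughly what a scalp electrode integrates into an EEG wave.
population = spikes.sum(axis=0)
```

The summed population recovers a clear 10 Hz wave even though each individual neuron fires only sporadically - the rhythm is a property of the ensemble, not of any single cell.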

Neuroscientists consider two oscillations to be "basic" rhythms of the EEG - even though only one is a true "rhythm" - and these are termed the Alpha and Beta rhythms. Alpha is dominant at rest, when the eyes are closed, or when the subject is in a restful, meditative state. Beta dominates when the eyes are open and the brain is actively involved in attention and conscious thought. Beta is not a true rhythm in that it does not adhere to a strict oscillator, but instead reflects the natural rate(s) at which neurons can fire when active. The Mu rhythm is essentially an Alpha-like rhythm recorded from motor cortex when the muscles are at rest; however, it is modulated more by the cerebellum than by the thalamus.

Delta is the frequency most indicative of sleep and drowsiness. A subject with eyes closed but awake and thinking or meditating will show very little Delta. However, an awake but sleepy person will show intermittent bouts of large amplitude Delta waves in their EEG which represent "microsleep" episodes responsible for the "head nod" well-known to students in lectures! The "Power Spectrum" plot in the figure above represents analysis of EEG according to the underlying frequencies of the waveform (aka Fourier Analysis). In the sleep state (red), Delta dominates, and the faster frequencies (Beta and Gamma) are nonexistent (until REM sleep starts, that is). The awake subject shows more of the fast rhythms, and less of the slow rhythms.
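The frequency analysis behind such a power-spectrum plot can be sketched with a plain FFT. The two synthetic "EEG" traces below (frequencies and amplitudes chosen purely for illustration) mimic the drowsy/awake contrast, with delta dominating one trace and beta the other:

```python
import numpy as np

fs = 250                       # sampling rate (Hz)
t = np.arange(0, 4.0, 1 / fs)  # 4 seconds of synthetic signal

# Synthetic "EEG": a drowsy trace dominated by 2 Hz delta, and an awake
# trace dominated by 20 Hz beta (amplitudes are illustrative only).
drowsy = 3.0 * np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
awake  = 0.3 * np.sin(2 * np.pi * 2 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)

def power_spectrum(x, fs):
    """Return (frequencies, power) for a real-valued signal via the FFT."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return freqs, power

freqs, p_drowsy = power_spectrum(drowsy, fs)
_,     p_awake  = power_spectrum(awake, fs)
```

The dominant peak flips from 2 Hz (delta) to 20 Hz (beta), mirroring the sleep-versus-awake contrast described above. Real EEG analysis adds windowing and averaging (e.g. Welch's method), but the principle is the same.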

Theta and Gamma are interesting rhythms in that these actually are oscillator rhythms that are essential to information encoding in the brain. Theta is a slow rhythm associated with movement, and found mostly in the deep nuclei, particularly the limbic system. Hippocampal Place Cells tend to fire in phase with Theta, and the *difference* in time between neuron firing and the peak of the Theta rhythm can be used to determine distance, speed and direction of movement. Theta appears to be essential to *sequences* of neural activity that form a map of the environment and track navigation through that environment. There is also some indication that it assists in timing and coordinating muscle movement.
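A toy version of that phase code: suppose a place cell's spike arrives earlier in each theta cycle the farther the animal has traveled through the field. The linear phase advance below is an invented simplification of real phase precession, meant only to show how timing relative to an oscillation can carry information:

```python
import numpy as np

theta_hz = 8.0
period = 1.0 / theta_hz                    # one theta cycle = 125 ms

# Assumed code: spike lag after the theta peak shrinks linearly as the
# animal crosses the place field (0.0 = field entry, 1.0 = field exit).
distances = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
lag = period * (1.0 - 0.8 * distances)     # seconds after the theta peak

# A downstream circuit that "knows" the theta rhythm can invert the code
# and read position back out of spike timing alone.
decoded = (1.0 - lag / period) / 0.8
```

The point is not the particular numbers, but that *when* a spike occurs relative to the oscillation can carry information beyond *whether* it occurs.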

Gamma rhythm is associated with the cortical regions of the brain. Neurons that are active at the same time in separate brain areas, but representing the same or related information, tend to fire with the same relationship to fast oscillations of typically 40 Hz. Local regions may exhibit a rhythm up to 100 Hz, but the common factor is that the Gamma rhythm ensures that neurons within a circuit are active at the same time. Neuroscientists feel that the Gamma rhythms serve as the "binding" or connection between neurons within a region, and between associated regions, in order to allow them to work on the same information - whether that is a cognitive (thought) process or planning and carrying out a motor movement.

So, in addition to the examples of Neural Coding presented in the last blog, we now see that each can exist in relationship to oscillations created by collections of neurons that fire with the same rhythm. The presence or absence of such rhythms, not to mention the timing relationships within the rhythm, form a powerful additional modulator of the information that a neural code can represent. To this we then add the topographical connections between neurons, the specificity of neurotransmitter and receptor combinations, and the thousands of connections that each neuron can make...

The net result is a "device" that is capable of representing so much more than ones and zeros, and that makes the brain so much more than merely a collection of simple processing units!

Thus, despite the technological advance of putting more and more processors into a computer, we have *so* far to go before being able to model a mammalian brain with a computer.

Monday, May 2, 2011

In past blogs we have discussed the electrical and chemical means by which neurons produce electrical activity that can signal information. Within a neuron, this signaling is electrical; between neurons, it is chemical. Using these characteristics, plus the organization of neurons into different brain areas, neurotransmitters, circuits, networks, etc., the brain can potentially encode a *lot* of information.

Still a major question remains... "What is the information code?"

In fact, Drs. S. Deadwyler and R. Hampson asked that very question back in 1995 in the scientific journal "Science" (vol. 270, pg. 1316) regarding neural representation in the hippocampus of rats - a structure previously known for representing information about an animal's location in space. Their results described a hippocampal code for information in time, particularly in relation to the encoding of a memory and its later retrieval within a behavioral task.

It has long been known that there are different types of codes that neurons use to represent information. In the auditory and visual cortex, there is a *topographic* organization to the code. Neurons in the part of the retina or cochlea that respond to a specific stimulus (light angle or position or sound pitch) are "wired" to specific locations in the sensory cortex. Thus the "code" is in the neuron connections, and the neuron need only be active, or not, to represent its portion of the total information.
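A minimal sketch of such a "wired" code, using an invented tonotopic map - the pitch range and the number of cortical locations below are arbitrary:

```python
import numpy as np

# Toy tonotopic map: 16 cortical "locations", each hard-wired to one
# preferred pitch, spaced logarithmically from 200 Hz to 8 kHz.
pitches_hz = np.logspace(np.log10(200), np.log10(8000), 16)

def cortical_index(stimulus_hz):
    """Which location fires for a tone: the one with the nearest
    preferred pitch. The *identity* of the active location is the code."""
    return int(np.argmin(np.abs(pitches_hz - stimulus_hz)))
```

A 440 Hz tone and a 4 kHz tone activate different, ordered locations; no firing-rate information is needed, because the wiring itself carries the meaning.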

Within many neural structures, the neurons adopt a frequency code - the firing rate of the neuron represents a particular "value" of information. Stretch receptors in muscle and proprioceptors in joints respond to the amount of stretch or degree of angle by increasing firing rate. Muscle activation, in the form of neural outputs from the motor cortex, most often uses this type of code. The example in the upper right of the figure above shows a joint receptor that increases its firing rate as the angle of flex increases.

By the way, these are examples of "rastergram/histogram" plots. The dots represent individual action potentials fired by a single neuron over the time or position axis at the bottom. Each row of dots (raster) represents a single trial, test or repetition of the stimulus. The bargraph (histogram) beneath is the sum or average of all of the repetitions above, and allows neuroscientists to examine the average ("mean") firing of a neuron over time and repetition. This is necessary because single neuron firing is subject to many variables - it is in fact a chaotic, or more appropriately a *nonlinear*, system in which each firing depends on many more variables than we typically observe or measure. Thus we first need to know the *mean* firing; then we can look at the individual variability.
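The rastergram/histogram idea can be sketched numerically - simulated trials of a rate-coded "joint receptor" whose firing probability grows with joint angle (all probabilities here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_bins = 20, 50              # 20 repetitions, 50 angle bins

# Assumed rate code: firing probability per bin grows with joint angle,
# which sweeps from 0 to 90 degrees across the bins.
angle = np.linspace(0, 90, n_bins)
p_fire = 0.05 + 0.9 * (angle / 90.0)   # per-bin spike probability

# Rasters: one row of "dots" per trial; the histogram is the trial average.
rasters = rng.random((n_trials, n_bins)) < p_fire
histogram = rasters.mean(axis=0)
```

Any single row (trial) is noisy, but the trial-averaged histogram climbs smoothly with angle - exactly why neuroscientists need the *mean* firing before examining the variability.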

This is quite apparent in the "On-Off" pattern at the upper left, typical of a retinal ganglion neuron: you can see that the firing of the neuron is seldom fully *on* or *off*, but that there *is* a considerable difference between the two states. In the eye, a retinal ganglion cell will respond ("on") with action potential firing when a spot of light touches the photoreceptor neurons it is "wired" to. When light touches adjacent photoreceptor neurons, the activity of that retinal ganglion cell is suppressed - the "off" state. However, stray photons *do* reach the primary receptor neurons in the "off" state, resulting in random firing. Likewise, individual neurons may have different thresholds for triggering action potentials due to fatigue (too much light) or other factors - thus even the "on" state shows variability.

Note that both of the above types of code are very well suited to the "topographical" code described above. In fact, auditory neurons tend to utilize frequency codes, and visual neurons utilize On-Off codes, within the topographical "wiring" of auditory and visual cortex, respectively.

More complex coding is typified by the "place cells" of the hippocampus. Originally described by J. O'Keefe and J. Dostrovsky of University College London in 1971 (Brain Research, Volume 34, Issue 1, Pages 171-175), a hippocampal "place cell" is a neuron that becomes most active only when the animal is in a particular place in its environment. Within that "place field," these neurons appear to utilize a frequency code to represent distance from the center of the field. As the animal moves through the field, the neuron fires action potentials that are also entrained to one of the background oscillations of the brain [subject of the next blog]. When the animal reaches another location, a different place cell begins to fire, and the first returns to a background firing rate (lower left in figure above). Thus the place cell incorporates distance, speed and directionality in its firing. An animal's entire traversal through an environment can be tracked if enough place cells are recorded, and neuroscientists have since discovered neurons in connected brain areas that represent direction, body and head angle, visual mapping features, and even a "coordinate system" that underlies this "cognitive map."
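A common first-pass model of a place field is a Gaussian bump of firing rate over position. The sketch below (field centers, widths and rates are arbitrary) shows one cell "handing off" to another along a linear track:

```python
import numpy as np

def place_cell_rate(pos, center, width=0.1, peak_hz=20.0, background_hz=0.5):
    """Gaussian place-field model: firing peaks at the field center and
    decays to a low background rate away from it."""
    return background_hz + (peak_hz - background_hz) * np.exp(
        -((pos - center) ** 2) / (2.0 * width ** 2))

track = np.linspace(0.0, 1.0, 101)     # positions along a linear track
cell_a = place_cell_rate(track, center=0.3)
cell_b = place_cell_rate(track, center=0.7)
```

As the animal moves from position 0.3 to 0.7, cell_a falls back toward background while cell_b ramps up; with enough such cells, position can be decoded simply from which cell is firing fastest.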

The final type of code is the one utilized by most of the rest of the cortex - a sparse, distributed code (figure, lower right). This code is so named because the neurons that form it are often *not* located close to each other, adjacent neurons do not fire with the same correlate to the stimulus, and only a few neurons in any given brain area appear to be active in the code at a given time. The sparse, distributed code is in fact a combination of the other three types of code - it includes frequency elements, on-off elements, and "mapping" elements. It requires a defined topography of connections, but these are frequently self-organizing in a hierarchical manner. Thus new information can easily be added to the network by forming (or reinforcing) new connections between neurons.

The sparse, distributed code is the least well understood, yet it is the easiest to model using neural networks and advanced mathematical and statistical models. The reason is that such models rely on extracting arbitrary correlations and "mapping" those relationships across multiple dimensions in a manner that appears random when projected back onto the three-dimensional patterns with which we are familiar - and the neural connections appear to do the same thing. The example above shows neurons from rat brain active during the encoding and recall of information in a memory task. We can clearly see which neurons are active in each phase, even though the brain structure is not apparent. In fact, mathematical analysis and modeling can suggest how the neurons "ought" to be connected, and we frequently find that those connections exist, although they defy a purely "topographical" approach.

One option is to assume that instead of a "topography", such neurons represent a "topology" - a transformed space that is connected, but "stretched" or "deformed" from what we think is normal. Another option is to assume that the neurons self-organize - in other words, through their connections, each neuron "knows" its inputs and outputs, even though it would take painstaking measurement to find them all.

This is one of the challenges of building a computer that can serve as a model of the brain. In some cases we can pattern all of the connections but may not have the appropriate code; in others we can see the code, but the connections are too complicated to map. Nonlinear modeling is very useful for understanding, but it still does not provide the final product of "wiring" plus "code". Sometimes we just have to ask "What's the code?" and proceed to *use* the code without necessarily understanding it!

Until next time - take care of your brain, and don't worry *too* much that it is hiding all of its secrets in the code!

Sunday, May 1, 2011

Sorry to keep diverting away from The Lab Rats' Guide, so I will stick this in as a bonus odd-day post.

On Facebook the other day, a friend asked my opinion of an article that laments a major scientific funding organization's policy of funding questionable research that supports national policy directives while making it near impossible to get funded for research which could possibly disprove or alter our fundamental understanding of the same issues.

"Often, an agency’s request for proposal, or RFP, reads like a legal document, constricting the applicant to stay within very narrow and conventional bounds, with no profound scientific questions posed at all. Many RFP’s are so overly specific that they amount to little more than work for hire. Those who know how to play the game simply reply to RFP’s with parroted responses that echo the language in the proposal, in efforts to convince the reviewers that their programs exactly fit the conditions of the RFP."

"The atmosphere being created by the present system in academic science is joyless. Good scientific research requires dedication, patience, enthusiasm and a high degree of passion for the chosen subject. Overhearing conversations in the corridors of my own institution, I am struck by the fact that the topics are almost always related to proposal writing and funding and not to scientific ideas."

While I have not seen such blatant pressure to fund "politically correct" science, I can certainly attest that much more emphasis is now placed on the difficulties of obtaining funding. Part of this difficulty is in the peer review process. A given committee member on a scientific review panel (for NIH research grant applications) will probably have to review 30 grants this year. There are three review sessions spaced throughout the year; a given reviewer will be assigned 10 grants each session, and each grant will have 3 main reviewers out of a committee of 20 people. They then meet (usually in Washington, D.C.), discuss, and assign scores for each application. Only the "best" half of the applications gets discussed by the committee (the "lower half" gets only a written review), and all committee members can discuss and recommend scores. The current funding climate is such that only about 5% of all applications can be funded, so it is highly competitive. However, it is very true that "flashy" and "relevant" projects have the appeal to ensure not only that the application will be discussed (i.e. "upper half"), but that it will be well-scored.

On the other hand, I can certainly see evidence of how political ambitions can attempt to subvert science. One need only read The Congressional Record to see attempts by Senators and Representatives to insert their favorite causes into the funding bills for various agencies that support scientific research.

However, as I see it, all of government funded research is potentially in trouble.

The U.S. national budget just cannot sustain our current level of funding - yet instead of cutting out the bureaucracy and regulatory burden, the "low hanging fruit" will be money that is paid out to non-entitlement programs. That means we can expect to see *way* less funding for NIH and NSF. We're already feeling the pinch. One alternative is that there *are* research dollars in "Congressionally Directed Research Programs" and the like - basically earmarked money for the projects that Reps and Senators can get each other to vote for. That means "popular" (or at least high-profile) projects like climate, AIDS, cancer, etc.

The result is more *directed* research on particular topics. *Undirected* research (what we call Basic Science) is much less likely to be funded, but it is Basic Science that often yields alternatives that the "directed" research will not or cannot discover (because they lie outside the direction of funding). As such, LabRat Intelligence research cannot in fact be expected to disprove LabRat Intelligence. [Note - this is not a political statement or denial - it is a statement of *how* the funding and research is set up.] First, if a project is specifically set up to research LabRat Intelligence, it makes an implicit assumption that LabRat Intelligence exists. Then, if the Null Hypothesis is not carefully conceived, it also starts from an assumption that LabRats are intelligent [meanwhile I am being stared at by Ratley!]. The third pitfall is failure to consider alternatives - perhaps the intelligent creatures are really extraterrestrial? Then we have the flawed assumption that they were lab rats! So *directed* research can very easily fall into the problem that the *direction* of the research precludes alternative explanations.

Another *real* example. Alzheimer's Disease can be expected to continue to receive funding. So will PTSD research, and to a lesser extent, drug abuse. A common factor in all three is memory. The "Basic Science" approach would be to learn as much as we can about memory, and along the way learn how disease, stress and drugs alter memory. The *directed* approach is to *assume* that Alzheimer's disease has a particular effect on memory and find a way to stop it. While useful, the research can all too easily miss basic principles that are applicable to *many* things, such as other diseases, age effects, general health status, or brain-machine interfacing.

Other sources of research funds are private foundations - which typically focus on only one disease - or drug companies, which want to know whether their drug will work (or how to make it work). All of which serves to focus too much on the result, and not on the inquiry that is the root of scientific discovery. There *are* private sources of science funding, but I fear too much of science funding is headed toward *directed* (or even "micromanaged") rather than *basic* research.

My friend tells me that he is *way* more optimistic than I am about private sources of scientific funding. I won't argue the point, nor will I deny that there is a place for continued federal scientific funding. I do know that many people are earning PhDs and not continuing into academic science and research. Still more academic scientists are retiring early or finding other professions. I don't know the solution, and I do not think it will be easy.

It *does* have me worried, because it portends another scientific "Dark Ages" if we don't figure out that we still have a need for the type of scientific research that says "What if?".