The View

The past few decades have ushered in a revolution in neuroscience -- the study of how the human brain works. Yet surprisingly little of this understanding has seeped into the public consciousness. We still talk about the brain as if we lived in the year 1950. Our popular understandings are grossly outdated and often quite mistaken.

In his recent popular work, Who’s In Charge?: Free Will and the Science of the Brain (2011), Professor Michael Gazzaniga seeks to correct these misapprehensions and present a modern model based on his lifetime of work in the area. In doing so, he – unbeknownst to him – brings the subject clearly into focus under the Lenses of Wisdom. As it turns out, recent developments in neuroscience confirm the validity and usefulness of much of that wisdom.

Professor Gazzaniga first describes the recent history of neuroscience and debunks many of the popular misconceptions that most of us learned in our misspent youth and have taken as gospel in our misspent adulthoods. In doing so, he reminds us once again that "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." In brief:

In the first half of the 20th Century, the concept of behaviorism – the idea that the brain merely reacted to stimuli – reigned supreme. Based largely on experiments conducted on reptiles and rats, researchers believed that the brain operated under two principles: mass action and equipotentiality. Mass action meant that the action of the brain as a whole determined its performance. Equipotentiality meant that any part of the brain could carry out a given task, and that there was no specialization. Thus, the behaviorists believed that the brain was essentially a blank slate that could be easily programmed with the right external inputs. In the 1930s Professor John Watson of Johns Hopkins famously and erroneously asserted: “Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select— doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”

These ideas were debunked beginning in the late 1940s. It was theorized and then confirmed that the brain functioned through the firing and connectivity of neurons, not as a blank gray equipotential mass. From there it was discovered that the brain functioned through specialized neural networks. But how these neural networks came into being and actually functioned remained mysterious.

During the 1960s, research was conducted to determine whether human brains were functionally different from animal brains or were just “bigger animal brains,” as had been theorized since Darwin’s time – the so-called “Big Brain Theory.” The research revealed that human brains were in fact organized in fundamentally different ways from those of animals, even those most closely related to us. Based on that research, evolutionary biologist Charles Oxnard concluded: “The nature of human brain organization is very different from that of chimpanzees, which are themselves scarcely different from the other great apes and not too different even from Old World monkeys.” In other words, as the human brain evolved, it was not that additional skills were simply added on, as once hypothesized under the Big Brain Theory, but that the whole brain was rearranged and reformulated in different ways.

Subsequent research over the next 40 years revealed that the brain is organized into a series of modules, with greater numbers of connected neurons within the modules and fewer connections between them. Neuroimaging studies show that these modules operate like parallel circuits that process different inputs simultaneously. One part of the brain reacts when you hear words, another particular part reacts to seeing words, still another area reacts while speaking words, and they can all be going at the same time.

Figuring out the organization of the modules proved to be a devilishly difficult task. The brain is asymmetric, with a dominant hemisphere – usually the left. Certain modules appear in both hemispheres, while other modules appear in only one. And they are not in the same place or of the exact same construction in every person. Moreover, the same structures in animals often act differently in the human brain, making scientists more wary of cross-species comparisons. Indeed, research in the past ten years has revealed that the neurons themselves may be different from species to species and perform different functions even if there are similarities in form. As Professor Gazzaniga relates it: “All neurons are not alike, and some types of neurons may be found only in specific species. Moreover, a given type of neuron may exhibit unique properties in a given species.” This called into question what could be concluded from much of the prior research, which had only been conducted on animals.

Gazzaniga’s research life began in earnest in the 1960s with the study of split-brain patients, whose connections between the two hemispheres had been surgically severed to treat severe seizures. This resulted in some interesting conundrums. As Gazzaniga observed in 1972:

“Over the past ten years we have collected evidence that, following midline section of the cerebrum, common normal conscious unity is disrupted, leaving the split-brain patient with two minds (at least), mind left and mind right. They coexist as two completely conscious entities, in the same manner as conjoined twins are two completely separate persons.”

This posed the problem of whether each consciousness had its own protagonist: Were there then two selves? Were there also two free wills? Why aren’t the two halves of the brain conflicting over which half is in charge? Is one half in charge? Were the two selves of the brain trapped in a body that could only be at one place at one time? Which half decided where the body would be? Why, why, why was there this apparent feeling of unity? Were consciousness and the sense of self actually located in one half of the brain?

This theory – known as the “Dichotomous Brain Theory” – which many of us learned about in the 1970s and 1980s, has also been debunked as too simplistic and only partially accurate. As Gazzaniga describes:

“Our findings gradually indicated to us that both halves of the brain had specializations, but each half of the brain was not equally conscious, that is, it was not conscious of the same things, and not equally capable of performing tasks. This was rotten enough for dichotomous brain theory, but absolutely stinking for the existing concepts about the unity of consciousness. Back to the drawing board with the question, Where is this conscious experience coming from? Does the information get processed and then channeled through one kind of conscious activation center that makes subjective experiences aware to you and me, or is it organized differently? The scales were tipping toward a different type of organization; a modular organization with multiple subsystems. We began to doubt that a single mechanism existed that enables conscious experience, but rather were heading toward the idea that conscious experience is the feeling engendered by multiple modules, each of which has specialized capacities. Since we were finding specialized capacities in all different regions of the brain and since we had seen that conscious experience was closely associated with the part of the cortex involved with a capacity, we came to understand that consciousness is distributed everywhere across the brain. Such an idea was directly contrary to [the idea of] the left hemisphere as the site of consciousness.”

The most surprising findings are that there does not appear to be any “master control” module, “central processor” or sub-brain that represents the self by itself. This conclusion has been difficult for many to accept, including many neuroscientists. In fact, Gazzaniga and others have concluded that the brain is a complex adaptive system – that thing which we see when we look at the brain and mind through the Fractal Lens.

Now looking through that lens, we see how neuroscience is catching up with and catching on to complexity theory. Gazzaniga notes the parallel yet independent development of complexity theory with neuroscience:

“Examples of complex systems are popping up all over the place: weather and climate in general, the spread of infectious disease, ecosystems, the Internet, and the human brain. Ironically for psychology in its quest to fully understand behavior, the signature phenomenon of a complex system “is the multiplicity of possible outcomes, endowing it with the capacity to choose, to explore and to adapt.” . . . Relevant to our current question about feeling unified and in control is an important point that Northwestern University’s physicist Luis Amaral and chemical engineer Julio Ottino make: “The common characteristic of all complex systems is that they display organization without any external organizing principle being applied.” That means no head honcho, no homunculus.”

Putting this all together, Gazzaniga summarizes:

“The view in neuroscience today is that consciousness does not constitute a single, generalized process. It is becoming increasingly clear that consciousness involves a multitude of widely distributed specialized systems and disunited processes, the products of which are integrated in a dynamic manner by the interpreter module. Consciousness is an emergent property. From moment to moment, different modules or systems compete for attention and the winner emerges as the neural system underlying that moment’s conscious experience. Our conscious experience is assembled on the fly, as our brains respond to constantly changing inputs, calculate potential courses of action, and execute responses like a streetwise kid. . . . [Yet] we do not experience a thousand chattering voices, but a unified experience. Consciousness flows easily and naturally from one moment to the next with a single, unified, and coherent narrative.”

In effect, what the neuroscience is saying is that the brain is a complex adaptive system that creates what we call the "mind", or consciousness, as one of its emergent properties. Because the mind is an emergent property, it is possible to know from brain circuitry and scans that thought is being produced, but impossible to know from such analyses what a mind is actually thinking. Gazzaniga elaborates further as to the properties of emergent systems:

"Emergence is a common phenomenon that is accepted in physics, biology, chemistry, sociology, and even art. When a physical system does not demonstrate all the symmetries of the laws by which it is governed, we say that these symmetries are spontaneously broken. Emergence, this idea of symmetry breaking, is simple: Matter collectively and spontaneously acquires a property or preference not present in the underlying rules themselves. The classic example from biology is the huge, towerlike structure that is built by some ant and termite species. These structures only emerge when the ant colony reaches a certain size (more is different) and could never be predicted by studying the behavior of single insects in small colonies. . . .

The key to understanding emergence is to understand that there are different levels of organization. My favorite analogy is that of the car, which I have mentioned before. If you look at an isolated car part, such as a cam shaft, you cannot predict that the freeway will be full of traffic at 5:15 P.M. Monday through Friday. In fact, you could not even predict the phenomenon of traffic would ever occur if you just looked at a brake pad. You cannot analyze traffic at the level of car parts. Did the guy who invented the wheel ever visualize the 405 in Los Angeles on Friday evening? You cannot even analyze traffic at the level of the individual car. When you get a bunch of cars and drivers together, with the variables of location, time, weather, and society, all in the mix, then at that level you can predict traffic. A new set of laws emerge that aren’t predicted from the parts alone.”
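Gazzaniga's traffic analogy can actually be run as a tiny experiment. The sketch below is my own illustration, not from the book: it implements the classic Nagel–Schreckenberg cellular-automaton model of highway traffic, in which each simulated driver follows only four local rules (speed up, don't hit the car ahead, occasionally hesitate, move). No rule mentions "traffic jams," yet once enough cars share the road, jams emerge and average speed collapses.

```python
import random

def step(road, vmax=5, p_slow=0.3, rng=random):
    """One parallel update of a Nagel-Schreckenberg-style traffic model.

    road: list where road[i] is a car's current velocity (int) or None if empty.
    """
    n = len(road)
    new_road = [None] * n
    for i, v in enumerate(road):
        if v is None:
            continue
        # Count empty cells ahead on the circular road.
        gap = 0
        while gap < n and road[(i + gap + 1) % n] is None:
            gap += 1
        v = min(v + 1, vmax)               # rule 1: try to speed up
        v = min(v, gap)                    # rule 2: brake to avoid the car ahead
        if v > 0 and rng.random() < p_slow:
            v -= 1                         # rule 3: random human hesitation
        new_road[(i + v) % n] = v          # rule 4: advance
    return new_road

def average_speed(road):
    speeds = [v for v in road if v is not None]
    return sum(speeds) / len(speeds)

def simulate(n_cells=200, density=0.1, steps=200, seed=1):
    """Place cars at the given density, run the model, return final average speed."""
    rng = random.Random(seed)
    occupied = set(rng.sample(range(n_cells), int(n_cells * density)))
    road = [0 if i in occupied else None for i in range(n_cells)]
    for _ in range(steps):
        road = step(road, rng=rng)
    return average_speed(road)

if __name__ == "__main__":
    print("average speed, 5% density: ", simulate(density=0.05))
    print("average speed, 50% density:", simulate(density=0.5))
```

At low density the road flows near the speed limit; at high density, stop-and-go waves form and average speed drops sharply. That is the point of the analogy: traffic is a property of the interacting system, recoverable at the level of many cars but not from any brake pad.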

So, besides consciousness itself, what are the other emergent properties of the brain that make up the human mind? For this, we take in the view through the Prospecting Lens. You might want to look at that page and the Lenses of Wisdom page for a refresher.

In fact, the Prospecting Lens, with its relatively automated menu of System 1 heuristics and its slow but deliberative System 2 analytics, almost perfectly illuminates many of the emergent properties of the mind.

There are two more fundamental emergent properties, though, that underpin many of the System 1 heuristics: first, our propensity to construct narratives to explain actions; and second, our propensity to project agency, thoughts, or volition onto actors (other people, animals, etc.) whom we observe. These lead to System 1 cognitive biases that include Coherent Stories (Associative Coherence), the Confirmation Bias, Overlooking Luck, Intuitive Predictions, the Narrative Fallacy and the Hindsight Illusion, among others.

To understand these two underlying emergent properties, we again consult Professor Gazzaniga. As to the first, in the 1980s Gazzaniga and others began developing a theory of “the interpreter.” They discovered that the left or dominant side of the brain engages in a process of finding explanations for events or unconscious actions that have already taken place – in other words, constructing post-hoc narratives of cause and effect as to why a person has taken a certain action. There is no one specific area of the dominant hemisphere that acts as “the interpreter”; it emerges instead from the interaction of several areas of the left brain. The interpreter seeks patterns and imagines causation even when causation between events A and B is not present, even “filling in the blanks” when there is a lack of data. As Professor G puts it:

"The psychological unity we experience emerges out of the specialized system called “the interpreter” that generates explanations about our perceptions, memories, and actions and the relationships among them. This leads to a personal narrative, the story that ties together all the disparate aspects of our conscious experience into a coherent whole: order from chaos. The interpreter module appears to be uniquely human and specialized to the left hemisphere. Its drive to generate hypotheses is the trigger for human beliefs, which, in turn, constrain our brain.”

As to the second principle or emergent property, we must look to what is known as the "Theory of Mind". As Professor G explains:

"[I]n 1978 David Premack came up with a fundamental idea that now governs so much of social psychological neuroscience work. He realized that humans have the innate ability to understand that others have minds with different desires, intentions, beliefs, and mental states, and the ability to form theories, with some degree of accuracy, about what those desires, intentions, beliefs, and mental states are. He called this ability theory of mind (TOM) and wondered to what extent other animals possessed it. Just the fact that he wondered if other animals possessed it sets him apart from most of us. Most people assume that other animals, especially cute ones with big eyes, have a theory of mind, and many of us even project this onto objects. In fact, within seconds, this response can be elicited in the presence of Leonardo, a socially programmed robot at MIT, who looks like a puckish cross between a Yorkshire terrier and a squirrel that is two and a half feet tall. While observing the behavior of what appears to be a self-propelled and goal-oriented robot, just as babies watching the triangle trying to get up the hill, we automatically see the robot as having intentions and we come up with psychological theories, that is, interpretations, about why Leo is behaving in a certain way, just as we do with other people (and our pets).

Once you understand the power of this mechanism, what activates it, and how we humans apply it to everything from our pets to our cars, it is easy to understand why anthropomorphism is so easy to resort to, and why it can be so hard for humans to accept that some of their psychological processes are unique. We are wired to think otherwise. . . . TOM is fully developed automatically in children by about age four to five, and there are signs that it is partially, or even fully, present by eighteen months. Interestingly, children and adults with autism have deficits in theory of mind and are impaired in their ability to reason about the mental states of others, and, as a result, their social skills are compromised.”

One can easily see how the interpreter allows us to construct narratives about ourselves, while the theory of mind allows us to construct narratives about others. These in turn, individually and combined, give rise to the behaviors that were catalogued by Kahneman & Tversky, and that we observe through the Prospecting Lens.

On a more "meta" level, Gazzaniga notes that many neuroscientists are currently suffering from their own cognitive bias and dissonance about the application of complexity theory to their world, namely the System 1 heuristic known as THEORY-INDUCED BLINDNESS. To wit from Kahneman: "Once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing." When the blinders fall off, the previously believed error seems absurd, and the real breakthrough occurs when you can’t remember why you didn’t see the obvious. Potential for error: Clinging to old paradigms that have outlived their validity.

Here, that old paradigm as to the brain and the mind is what is known as scientific reductionism, which the Fractal Lens and complexity theory have been overtaking and replacing in many fields. In each case, there has been fierce resistance from some quarters, and neuroscience is no different. Professor Gazzaniga notes that during his career, most neuroscientists have been “hard determinists”, tending to believe that with enough information, the cause of every thought could be determined. This idea is a consequence of scientific reductionism, which he traces back to Newton and which reigned supreme in virtually all sciences into the 20th Century. Yet most other sciences have now abandoned this idea in favor of complexity theory and emergence, beginning with physics in the mid-20th Century:

“So in some part because of chaos theory and perhaps more so because of quantum mechanics and emergence, physicists are sneaking out the determinism back door, with their tails between their legs. Richard Feynman, in his 1961 lectures to Caltech freshmen, famously declared: “Yes! Physics has given up. We do not know how to predict what would happen in a given circumstance, and we believe now that it is impossible— that the only thing that can be predicted is the probability of different events. It must be recognized that this is a retrenchment in our earlier ideal of understanding nature. It may be a backward step, but no one has seen a way to avoid it. . . . So at the present time we must limit ourselves to computing probabilities. We say ‘at the present time,’ but we suspect very strongly that it is something that will be with us forever— that it is impossible to beat that puzzle— that this is the way nature really is.””

He goes on to discuss the state and fate of hard determinism in neuroscience:

“So the hard determinists in neuroscience make what I call the causal chain claim: (1) The brain enables the mind and the brain is a physical entity; (2) The physical world is determined, so our brains must also be determined; (3) If our brains are determined, and if the brain is the necessary and sufficient organ that enables the mind, then we are left with the belief that the thoughts that arise from our mind also are determined; (4) Thus, free will is an illusion, and we must revise our concepts of what it means to be personally responsible for our actions. Put differently, the concept of free will has no meaning. The concept of free will was an idea that arose before we knew all this stuff about how the brain works, and now we should get rid of it.

There is no disagreement among the neuroscientists about the first claim, that the brain enables the mind in some unknown way and the brain is a physical entity. Claim 2, however, has become a loose link and is under attack: Many physicists are no longer sure that the physical world is predictably determined because the nonlinear mathematics of complex systems does not allow exact predictions of future states. Now we have claim 3 (that our thoughts are determined) on shaky ground. Although some neuroscientists think we may prove that specific neuronal firing patterns will produce specific thoughts and that they are predetermined, none has a clue about what the deterministic rules would be for a nervous system in action. I think that we are facing the same conundrum that physicists dealt with when they assumed Newton’s laws were universal. The laws are not universal to all levels of organization; it depends which level of organization you are describing, and new rules apply when higher levels emerge. Quantum mechanics are the rules for atoms, Newton’s laws are the rules for objects, and one couldn’t completely predict the other. So the question is whether we can take what we know from the micro level of neurophysiology about neurons and neurotransmitters and come up with a determinist model to predict conscious thoughts, the outcomes of brains, or psychology. Or even more problematic is the outcome with the encounter of three brains. Can we derive the macro story from the micro story? I do not think so.

I do not think that brain-state theorists, those neural reductionists who hold that every mental state is identical to some as-yet-undiscovered neural state, will ever be able to demonstrate it. I think conscious thought is an emergent property. That doesn’t explain it; it simply recognizes its reality or level of abstraction, like what happens when software and hardware interact, that mind is a somewhat independent property of brain while simultaneously being wholly dependent upon it. I do not think it possible to build a complete model of mental function from the bottom up.” . . .

"Yet, emergence is mightily resisted by many neuroscientists, who sit grimly in the corner and continue to shake their heads. They have been celebrating that they have finally dislodged the homunculus out of the brain. They have defeated dualism. All the ghosts in the machine have been banished and they, as sure as shootin’, are not letting any back in. They are afraid that to put emergence in the equation may imply that something other than the brain is doing the work and that would let the ghost back into the deterministic machine that the brain is. No emergence for them, thank you! I think this is the wrong way for neuroscientists to look at the problem. Emergence is not a mystical ghost but the going from one level of organization to another. You, alone on the proverbial desert island, or for that matter, alone in your house on a rainy Sunday afternoon, follow a different set of rules than you do at a cocktail party at your boss’s house."

It appears that although the jury may be out, the die is cast, as it was in physics when quantum theory arrived on the scene. And as Max Planck famously observed, "Science advances one funeral at a time" – meaning that Theory-Induced Blindness is a powerful System 1 heuristic indeed, one that affects even the most erudite thinkers. But time will pass and views will change to match the best available scientific models.

Now taking a brief gaze through the Mimetic Lens, we can see how neuroscience confirms its validity, even though Girard's theory was formulated before the experimental science was conducted and without reference to it. This is important, because when you see ideas converge independently from different directions, you know that they are very powerful ideas indeed.

First, the research confirms that mimetics are fundamental and innate:

“Babies first enter the social world through imitation. They understand they are like other people and imitate human actions, but not those of objects. This is because the human brain has specific neural circuits for identifying biological motion and inanimate object motion, along with specific circuits to identify faces and facial movement. A baby cannot do much to enter the social world and form a link with another person before she can sit up, control her head, or talk. But she can imitate. When you hold a baby, what links the two of you together in the social world are her imitative actions.”

Second, the Theory of Mind mechanism, which allows us to step into the minds of others and imagine ourselves as them abstractly, has also been confirmed as an innate feature of the human mind:

“It turns out that we are wired from birth for social interactions. A great many of our social abilities come hardwired from the baby factory. The advantage of hardwired abilities, of course, is they work immediately and don’t have to be learned, as opposed to all of the survival skills that do. David and Ann Premack got the ball (or I should say triangle) rolling on the studies of intuitive social skills by looking for what, if any, social concepts toddlers understood. It had been shown in the early 1940s that, when presented with films of geometric shapes moving in ways that suggest intention or goal-directed behavior (moving in ways that an animal would move), people will even attribute desires and intentions to geometric figures. The Premacks demonstrated that even ten- to fourteen-month-old infants, watching objects that appeared to be self-propelled and goal-oriented, automatically interpreted the objects as intentional, and, more important, they assigned a positive or negative value to the interaction between intentional objects.”

Third, the discovery of so-called "mirror neurons" shows how these features of the mind are connected with the human brain and that the way they are employed in human beings appears to be uniquely expansive:

“In the mid-1990s, while they were studying the grasping neurons in macaque monkeys, Giacomo Rizzolatti and his colleagues discovered something quite remarkable and soon realized that they had come across the cortical origins of how an animal could appreciate the mental state of another. They found that when a monkey grasps a grape, the very same neuron fires as when the monkey observes another individual grasping a grape. They called these mirror neurons, and they are one of the great recent discoveries in neuroscience. They were the first concrete evidence that there is a neural link between observation and imitation of an action, a cortical substrate for understanding and appreciating the actions of others. Since these original observations, mirror neuron systems that are quite different and much more extensive than those of the macaque have been identified in humans. The mirror neurons in the monkey are restricted to hand and mouth movements and only fire when there is goal-directed action, which may be why monkeys have very limited imitation abilities. In humans, however, there are mirror neurons that correspond to movements all over the body, and they fire even when there is no goal; in fact, the same neurons are active even when we only imagine an action. The mirror neurons are implicated not only in the imitating of actions, but also in understanding the intention of actions.”

Fourth, and perhaps most interesting, the research that Gazzaniga describes even confirms that the brain operates -- or rather fails to engage -- in a specific and unique way when an individual participates in the dehumanization process that is part of scapegoating. As he relates:

“The [] unconscious brain process that may bias proceedings, dehumanizing out-groups, has been studied by Lasana Harris and Susan Fiske. They found that, when American subjects view certain social groups, different emotions are elicited depending on what group it is. The emotions of envy (when viewing the rich), pride (seeing American Olympic athletes), and pity (while viewing photos of elderly people) are all associated with activity in the area of the brain (the medial prefrontal cortex, or mPFC) that is activated in social encounters. However, the emotion of disgust (looking at photos of drug addicts) is not. The activation patterns in the mPFC while viewing photos of social groups that elicited disgust were no different when the subjects viewed objects, such as a rock. This suggests that members of groups that elicit disgust, which are extreme out-groups, are dehumanized. This is what occurs during war: the enemy group elicits disgust and is dehumanized and pejoratively labeled.”

And this in turn can lead to archaic-sacred forms of "justice" through sacrificial violence to maintain societal order:

“Utilitarian justice also may punish one person to deter others, the severity need not relate to the actual offence: A thief of a CD player could receive a harsh sentence to deter others from thieving. . . . The extreme case can even be made that the punished need not even be guilty, just thought guilty by the public. An innocent person could be arrested as a scapegoat and their imprisonment could stave off a vigilante effort or riot for the greater good."

And indeed, that is how scapegoating has traditionally worked to cement societies together in collective violence against the chosen scapegoat, from ancient stonings to modern lynchings, literal or in essence.

* * * * * * * * *

This has been a long slog of a blog entry and I appreciate anyone who has had the patience to read through it.

But what about that "free will" versus "determinism" question referenced above that has plagued so many minds for so long? The last few decades of research into neuroscience and the parallel development of complexity theory indicate that the question probably does not make much sense as classically posed. While an individual brain is best described in a deterministic way, in the higher-order emergent world of multiple brains and people where we live, we most certainly are held responsible for our own actions, and with good reason. In fact, the multi-brain environment feeds back into the brain, which then feeds back into the environment, and deterministic "causes" cannot be separated out. As the good Professor explains:

"In the end, my argument is that all of life’s experiences, personal and social, impact our emergent mental system. These experiences are powerful forces modulating the mind. They not only constrain our brains but also reveal that it is the interaction of the two layers of brain and mind that provides our conscious reality, our moment in real time. Demystifying the brain is the task of modern neuroscience. To complete that job, however, will require neuroscience to think about how the rules and algorithms that govern all of the separate and distributed modules work together to yield the human condition.

Understanding that the brain works automatically and follows the laws of the natural world is both heartening and revealing. Heartening because we can be confident the decision-making device, the brain, has a reliable structure in place to execute decisions for actions. It is also revealing, because it makes clear that the whole arcane issue about free will is a miscast concept, based on social and psychological beliefs held at particular times in human history that have not been borne out and/or are at odds with modern scientific knowledge about the nature of our universe. As John Doyle has put it to me:

“Somehow we got used to the idea that when a system appears to exhibit coherent, integrated function and behavior, there must be some “essential” and, importantly, central or centralized controlling element that is responsible. We are deeply essentialist, and our left brain will find it. And as you point out, we’ll make up something if we can’t find it. We call it a homunculus, mind, soul, gene, etc. . . . But it is rarely there in the usual reductionist sense. . . . That doesn’t mean there isn’t in fact some “essence” that is responsible, it’s just distributed. It’s in the protocols, the rules, the algorithms, the software. It’s how cells, ant hills, Internets, armies, brains, really work. It’s difficult for us because it doesn’t reside in some box somewhere, indeed it would be a design flaw if it did because that box would be a single point of failure. It’s, in fact, important that it not be in the modules but in the rules that they must obey.”

And finally:

“[U]ltimately responsibility is a contract between two people rather than a property of a brain, and determinism has no meaning in this context. Human nature remains constant, but out in the social world behavior can change. Brakes can be put on unconscious intentions. I won’t throw my fork at you because you took a bite of my biscuit. The behavior of one person can affect another person’s behavior. I see the highway patrolman coming down the onramp and I check my speedometer and slow down. As I said in the last chapter, the point is that we now understand that we have to look at the whole picture, a brain in the midst of and interacting with other brains, not just one brain in isolation. No matter what their condition, however, most humans can follow rules. Criminals can follow the rules. They don’t commit their crimes in front of policemen. They are able to inhibit their intentions when the cop walks by. They have made a choice based on their experience. This is what makes us responsible agents, or not."

I could not have said it better. So I will not attempt to do so. But if you would like to see the Gifford lectures where Professor Gazzaniga lays out this research, they are available below: