Control: Conscious and Otherwise

by Christopher Suhler and Patricia Churchland

Introduction
An important notion in moral philosophy and many legal systems is that certain circumstances can mitigate an individual’s responsibility for a transgression. Generally speaking, such situations are considered extenuating in virtue of their exceptional influence on a person’s ability to act and make decisions in a normal manner. The essence of the case for diminished responsibility is that these special circumstances impede the ability of a normal person to exercise self-control.

In recent years, however, this notion of diminished responsibility has come to wider attention in a quite unexpected way. Some researchers, drawing on findings from social psychology, have argued that situational forces may play a much larger role in behavior than traditionally assumed. The situational forces in question are often entirely ordinary, mundane and seemingly trivial. Given that such influences are pervasive, the general issue raised concerns control in commonplace cases. According to a condensed version of this view – which we call the Frail Control hypothesis for convenience – even in unexceptional conditions, humans have little control over their behavior. If correct, this line of argument could have widespread and dramatic ramifications, notably for our practices of attributing moral and legal responsibility. (We note that although in certain rare cases control and responsibility come apart, in most cases of moral and legal responsibility attribution, control and responsibility are closely linked.)

While agreeing that moral philosophy and the law can benefit from a greater understanding of developments in psychology and neuroscience, we suggest that the Frail Control challenge is markedly weakened once a wider range of data is considered. In our assessment, the Frail Control hypothesis underestimates the vigor of normal goal-maintenance in the face of distractions, and neglects the role of nonconscious aspects of control as displayed, for instance, in the exercise of cognitive, motor and social skills. Furthermore, a large psychological literature has demonstrated that nonconscious, automatic processes are pervasive and anything but “dumb”. Instead, they are often remarkably sophisticated and flexible in performing functions such as goal pursuit that were once considered the sole province of conscious cognition (Bargh & Morsella, in press) [3].

On the basis of these and other data, we develop an account of control that we believe goes some way toward sharpening the meaning of control – including nonconscious control – while accommodating the role of nonconscious processes in nearly everything we do. The general conclusion we will be arguing for is that nonconscious processes can support a robust form of control and, by extension, that consciousness is not a necessary condition for control. One notable feature of our account is a model of control in which neurobiological criteria, rather than intuitive or behavioral criteria alone, define the boundaries of control. A significant virtue of this account, in light of the pervasiveness of automatic processes in our cognitive lives, is that it is agnostic as to whether the underlying processes are conscious or nonconscious.

Frail Control
A leading advocate of the Frail Control hypothesis is the philosopher John Doris (e.g., Doris, 1998) [7]. He bases his claims on a range of data from social psychology showing that choices can be affected by various manipulations, such as priming (often below the level of consciousness) or ostensibly banal environmental features. For example, subjects exposed to words related to rudeness on a scrambled-sentence task are subsequently more likely to interrupt a (staged) conversation between the experimenter and another person than are subjects primed with words related to politeness or controls who are not primed (Bargh et al., 1996) [8]. Other studies show that people are more likely to litter in a particular setting when it is heavily littered than when the same setting is clean (Keizer et al., 2008) [9]. (For reviews, see Bargh & Morsella, in press; Nisbett & Wilson, 1977) [3, 10].

Adding to the surprise, the data appear to show that very minor environmental influences can at times produce large effects. Among the examples Doris cites are the finding by Isen and Levin (1972) [11] that “[p]assersby who had just found a dime were twenty-two times more likely to help a woman who had dropped some papers than passersby who did not find a dime” (Doris & Murphy, 2007) [12, p. 34] and the finding by Darley and Batson (1973) [13] that “[p]assersby not in a hurry were six times more likely to help an unfortunate who appeared to be in significant distress than were passersby in a hurry” (Doris & Murphy, 2007) [12, p. 34].

These data are connected to the issue of responsibility in the following way: if your choice is strongly affected by situational factors in ways that you are unaware of, then you plausibly have an excuse for your actions. Doris echoes widespread philosophical assumptions when he says that to be responsible we must have normative competence, meaning that we consciously weigh the evidence, effectively deliberate, and make a decision (Doris, 2002) [14, p. 136]. If the weighing and deciding occur below the level of consciousness, normative competence is compromised. No normative competence, no responsibility. (Other statements of Frail Control positions can be found in Wilson (2002) [15], Harman (1999) [16], Bargh (2008) [17], Wegner (2002) [18], and Appiah (2008) [19], as well as a recent news feature in Nature (Buchanan, 2009) [20].)

The conclusion that our actions are much more frequently excusable than hitherto assumed could have monumental implications for the law, both criminal and civil, as well as our daily social interactions. A rather different picture of control emerges, however, once the range of data is expanded to include neurobiological, clinical, and other behavioral data, as well as considerations from evolutionary biology.

The co-evolution of control and situational responsiveness
The co-evolution of sensitivity in responding to a diverse array of environmental stimuli and the capacity for executive control is highly probable (Baumeister, 2005; Dennett, 2002) [2, 21]. Generally speaking, if an organism is to reap the benefits of adaptive responsiveness to its environment, it must also be able to control how and to what it responds.

Observations of mammalian behavior suggest that mature animals do indeed regularly exhibit control. A cougar that can carefully stalk a deer will do better than one who just runs after it; antelope that go skittering off every time they glimpse a lion in the distance are apt to waste excessive amounts of energy. And laboratory experiments show that rats can defer gratification to obtain a larger reward (Dalley et al., 2004) [22] or be trained to stop an already-initiated bar press (Eagle et al., 2008) [23]; relevant behavioral differences in these tasks are described as differences in the capacity for control, and the circuitry underlying these capacities is an object of study.

Mechanisms for exercising control in numerous species, hominins included, were probably selected for in conditions that favored being able to defer gratification, wait for the advantage, plan ahead, undertake a complex, multi-step action, and so on. In discussing control as a deep and general feature of animal behavior, Baumeister makes the point that the desire for control, both of physical and social conditions, is fundamental to reproductive success. He thus remarks that “if control is part and parcel of getting most of the things one wants in life, a person could evade wanting control only by not wanting anything” (Baumeister, 2005) [21, p. 96]. Pursuing and achieving goals requires some measure of control, and the longer the lag time or the more numerous the obstacles in the path, the greater the need for control.

In the social environment of one’s own species, the capacity to exercise control and select an appropriate action is perhaps even more critical. For example, hierarchy is extremely important in chimpanzee and baboon troops. If an individual is to avoid social ostracism (or worse), he must be able to exert substantial control in managing feeding (Tomasello et al., 2003) [24] and mating (Crockford et al., 2007) [25] opportunities, and in seeking entry into new troops (Sapolsky, 2002) [26].

In modern human culture, exercising control to adjust to and thrive in one’s social environment is likewise paramount. In line with this, Baumeister (2005) observes that humans have an expanded repertoire of ways to satisfy the desire for control [21]. Humans exhibit control by, for instance, attending school, learning to build a house, maintaining a garden or farm animals, or going to work regularly, and such control tends to pay off over the course of a lifetime (Bembenutty & Karabenick, 2004; Mischel et al., 1989) [27, 28]. Skills of self-discipline and self-control are acquired by maturing children as a result of social pressure from many directions, including from peers (Blair & Diamond, 2008) [29].

In circumstances where nothing much hangs on doing A rather than B, vigilance may be lower and situational factors more significant. While pursuing a goal, one can encounter many “fringe” choices – whether to pick up a piece of litter, for example. How one decides these fringe choices, however, has very little to do with the normal function of executive control in pursuit of a goal. While attending to a task that has interrupted the pursuit of an important goal, people typically experience frequent intrusive thoughts about the goal, getting back to the goal, how to complete the current task quickly, and so on. Demonstrated experimentally and sometimes referred to as the Zeigarnik effect (Förster et al., 2005; Zeigarnik, 1927) [31, 32], this phenomenon implies that nonconscious processes continue to keep the goal high in priority until goal-related action resumes, whatever the task-irrelevant interruption. Far from indicating frail control, this phenomenon and the others described above bespeak stalwart and sturdy control.

A neurobiological account of control
We suggest that a range of neurobiological data and models of brain function (see Box 1 in published version) point to a way to sharpen the meaning of “control”. Our proposal has two parts. The first component is anatomical, specifying that the brain regions and pathways implicated in control are intact and that behavior is regulated by these mechanisms in a way consistent with prototypical cases of good control. So, for instance, if trauma or disease damages areas implicated in control – such as the fronto-basal-ganglia circuit (Aron et al., 2007) [33] and prefrontal cortex (Miller & Cohen, 2001) [34] – control will be impaired (Bechara et al., 1994, 1996; Damasio, 1994; Damasio et al., 1991; Fuster, 2008; Rushworth et al., 2004) [35-40]. The second component is physiological, comprising the molecular mechanisms whereby control is regulated. Even if the anatomical structures for control functions are intact, functionality requires that the levels of various neurochemicals – neurotransmitters, hormones, enzymes, and so on – be maintained within normal bounds. To a first approximation, what “normal” means here will be determined experimentally by discovering links between uncontentious examples of control in behavior and the neurobiological parameters in question; similarly for cases of impaired control. Roughly speaking, and granting individual variability, the normal range of the implicated neurochemicals (for a given species) is calibrated to the spectrum of values that the brain evolved to maintain in response to environmental demands typical of the species’ evolutionary past. Outside this range, control will be compromised. For instance, in addicts, a delayed return to baseline of corticotrophin releasing factor (CRF) levels, correlated with high levels of anxiety, appears to be a major factor in recidivism (Koob, 2006; Koob & Le Moal, 2008) [41, 42].
Considering a different parameter, low serotonin levels are correlated with poor impulse inhibition, implying that this neurochemical plays an important role in control (Beitchman et al., 2006; Ferrari et al., 2005; Frankle et al., 2005; Nelson & Trainor, 2007) [43-46].

Is there a way to connect this neurobiological perspective on what constitutes being “out of control” with the sorts of “situational” factors cited in support of the Frail Control hypothesis? We suggest not. Normal levels of neurochemicals, and thus control, can be disrupted when external circumstances are, for instance, profoundly threatening. Great fear or shock can trigger a cascade of stress responses (including a rise in CRF, glucocorticoids, and the catecholamines epinephrine and norepinephrine) (Koob, 2006; Koob & Le Moal, 2008; Lupien et al., 2007; Sorrells & Sapolsky, 2007) [41, 42, 47, 48] that may cripple control mechanisms. As a result, a captured spy may divulge secrets after “being shown the instruments of torture”, or a cuckolded husband may knife the disgraced pair in bed. These are the kinds of circumstances that courts regularly consider when asked to reduce penalties. Significantly, however, they are not the kind of mundane circumstances on which the Frail Control hypothesis relies.

To be clear, the account just sketched does not set the unreasonable standard that every relevant neurochemical must be at its ideal level or even within its normal range. Instead, the physiological requirement for being in control is defined in terms of a hyper-region in an n-dimensional “control space”. An important consequence of defining control in this way is that there will be many different combinations of neurochemical levels that fall within the “in control” hyper-region. As a result, a given neurochemical straying outside its normal range need not render a person “out of control”, assuming that other neurochemicals are within their normal ranges and that the deviation is not too extreme (see Figure).
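The hyper-region idea lends itself to a toy illustration. The sketch below is not from the published paper; the parameter names, baselines, half-widths, and the ellipsoidal bound are invented purely to make the geometry concrete. It shows how one parameter can stray outside its individual normal range without the overall state leaving the “in control” region, so long as the aggregate deviation stays bounded.

```python
# Toy illustration (hypothetical numbers): "in control" as a hyper-region
# in an n-dimensional space of neurochemical levels.

import math

# Hypothetical baselines and normal half-widths for three parameters.
BASELINE = {"serotonin": 1.0, "CRF": 1.0, "cortisol": 1.0}
HALF_WIDTH = {"serotonin": 0.3, "CRF": 0.3, "cortisol": 0.3}

def in_control(levels, bound=1.5):
    """State counts as 'in control' if the Euclidean norm of normalized
    deviations from baseline falls below `bound` (an ellipsoidal
    hyper-region, chosen purely for illustration)."""
    dev = [(levels[k] - BASELINE[k]) / HALF_WIDTH[k] for k in BASELINE]
    return math.sqrt(sum(d * d for d in dev)) < bound

# Serotonin outside its individual normal range [0.7, 1.3], others at baseline:
print(in_control({"serotonin": 0.6, "CRF": 1.0, "cortisol": 1.0}))  # True
# Several parameters far from baseline at once:
print(in_control({"serotonin": 0.6, "CRF": 1.6, "cortisol": 1.6}))  # False
```

The design choice mirrors the text: membership in the region depends on the joint configuration of parameters, not on each parameter independently staying within its own band.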

The role of nonconscious processes
According to a traditional framework – which, begging some forbearance, we will call neo-Kantian – consciousness is a paramount, and perhaps even necessary, condition for a decision’s being considered free. According to the neo-Kantian, consciousness must play a substantial role in most or all steps leading to a free decision: deliberating, choosing, intending, and acting. The interplay of reasons in deliberation must be transparent, since a reason must be conscious to be a reason at all; otherwise, it is a mere cause. Control, accordingly, is believed to be limited to those cases where most or all evidence, reasons, weighting of reasons, and so forth that contribute to a choice are consciously accessible. This transparency is central to the emphasis placed on consciously deliberated choice as a paradigmatic case of control in much of the philosophical literature, as well as the relation between “normative competence” and responsibility invoked by Doris (2002) [14, p. 136]. In keeping with the neo-Kantian perspective, the Frail Control hypothesis implicitly attaches enormous importance to whether the factors that play a role in an action can be consciously acknowledged as reasons. In our view, however, the general consciousness requirement for being “in control” is unrealistic. Exactly what role awareness of specific factors must play for an action to be considered controlled, relative to neurobiological criteria, is a matter not of stipulation, intuition, or semantics, but of scientific discovery.

The exercise of skills is one domain where nonconscious processes are entirely consistent with – and even boost – successful control. Skilled responses are involved in just about everything we do, from driving, reading and gardening, to getting along with members of our community and finding our way home (Aarts & Dijksterhuis, 2000) [49]. Studies of skill acquisition, whether motor (Poldrack et al., 2005) [52] or cognitive (Fincham & Anderson, 2006) [53], indicate that in skilled or trained individuals, conscious attention is directed not to the intermediate steps, but to the larger aim and to unforeseen hazardous contingencies. Routine control can therefore be automatic (as evidenced, for instance, by increases in anterior cingulate cortex activity with practice (Fincham & Anderson, 2006) [53]), while vigilant control can be directed to other things. Thus, habit and routine serve to spare the brain the energetic costs of close attention and to give the benefits of smooth operation (Berkman & Lieberman, 2009 [6]; Landau et al., 2004; Reichle et al., 2000; Sayala et al., 2006 [54-56]), making nonconscious control of this sort a great energy- and face-saving device. Notably, cognitive, motor, and social skills, including those that underlie habit and routine, are often invoked in later explanations of actions and are certainly robust enough in their guidance of action to be considered genuine reasons.

Additional support for the value of automaticity comes from the hypothesis that (conscious) executive control is itself a somewhat limited resource. According to this view, known as the self-regulatory resource model, the amount of energy people have to expend on conscious self-regulation is limited, with the result that expending it on one task reduces the amount available for other tasks (Baumeister et al., 2007a, 2007b) [57, 58]. This suggests that nonconscious processes not only perform control functions of their own but may also help to ensure the efficacy of conscious mechanisms of control.

Furthermore, as evidenced by our ability to function while being bombarded by stimuli on a moment-to-moment basis, environmental factors – even if processed below the level of conscious awareness – do not flow straight through to trigger behavior. A reason for this is suggested by the model developed by Miller and Cohen (2001) [34], which proposes that the prefrontal cortex exerts control by sending bias signals that modulate activity in other brain areas. On this account, the totality of environmental influences – via automatic processes – clearly need not determine behavior. Even if, acting alone, environmental factors were to give rise to a pattern of activity divergent from a goal, the prefrontal cortex can, through bias signals, cause a goal-relevant pattern of activity to prevail instead.
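The bias-signal idea can be pictured with a toy competition sketch. This is not Miller and Cohen's actual model (which involves learned pathways and neural dynamics); the pathway names and numbers below are invented solely to illustrate the logic: bottom-up input may favor one response, yet an additive top-down bias on the goal-relevant pathway can make that pathway prevail.

```python
# Toy sketch of a top-down bias resolving competition between pathways
# (pathway names and activation values are hypothetical).

def winning_pathway(bottom_up, bias):
    """Return the pathway with the highest total activation,
    where total = bottom-up input plus any top-down bias."""
    total = {p: bottom_up[p] + bias.get(p, 0.0) for p in bottom_up}
    return max(total, key=total.get)

# The environment alone favors the habitual response:
stimulus = {"habitual": 0.8, "goal-relevant": 0.5}
print(winning_pathway(stimulus, bias={}))                      # habitual
# With a "prefrontal" bias on the goal-relevant pathway, it prevails:
print(winning_pathway(stimulus, bias={"goal-relevant": 0.5}))  # goal-relevant
```

The point of the sketch is simply that environmental input sets up, but need not settle, the competition: a goal-maintaining signal can tip the outcome.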

In sum, although the idea that reasons and control can be (and often are) nonconscious is unacceptable to those who – even tacitly – accept the traditional, neo-Kantian view of action, it is consistent with the data.

Social psychology and control
The implications of the neurobiological account of control developed above for the interpretation of social psychological results can be summarized as follows: quite simply, most of the patterns of behavior described in the social psychology literature do not fall outside the realm of control (see also Bargh & Morsella, in press) [3]. The reason is that although the effects studied by social psychologists are mediated by situational factors and (often) by nonconscious processes, evidence indicates that the requirements for control set out above are typically met. First, the brain structures essential for control functions are intact (the anatomical condition). And second, the circumstances studied are usually within the typical range encountered in the evolutionary past of humans, and thus levels of various neurochemicals on which the proper functioning of the anatomical structures depends can reasonably be expected to be within their normal ranges (the physiological condition). Contrary to the claims of the Frail Control hypothesis, therefore, findings from social psychology should not be taken to motivate a substantial revision of our moral and legal practices of responsibility attribution.

The relationship between conscious and nonconscious control
To be clear, we are not advancing the radical thesis that there is no such thing as consciousness or conscious control. Our main point is rather that although consciousness – for instance of goals and what the neo-Kantian would call “reasons” – does sometimes have an important role in control, it is not required for control. Nonconscious control can be – and frequently is – exercised, and this control can be every bit as genuine as the conscious variety.

Given that the notion of nonconscious control is only beginning to gain traction in the scientific literature (Berkman & Lieberman, 2009) [6], we are not currently in a position to speculate about the precise interplay of conscious and nonconscious processes in controlled behavior or about the similarities and differences between the neurobiological substrates for these processes. Even so, it seems a safe bet that anatomical and physiological factors such as those discussed will figure in both conscious and nonconscious control in some fashion, and to some degree. The real work, however, will be in investigating the neurobiological details and teasing out how the anatomical and physiological factors underlying conscious and nonconscious control coincide and differ.

Moreover, while the extremes of conscious and nonconscious control may be fairly clear, the gradations and connections between cases are not yet major targets of research, let alone known with any certainty. However, by examining a range of cases between the extremes of conscious and nonconscious control and systematically varying the situational parameters, researchers may be able to illuminate the factors that prompt significant conscious involvement in the process of decision and action.

Conclusion
Recent challenges to the classical framework of control and responsibility are based on data from social psychology showing that minor external contingencies can play a significant role in behavior even when we are unaware that they do so. The data are taken to imply that control is rare and frail, and that the category of excuses from moral and legal responsibility should be modified accordingly. From our perspective, once these findings are placed alongside a broader range of data, a very different hypothesis is motivated: goal-maintenance and executive control are remarkably robust, and elements of control are often nonconscious. Neurobiological grounding for this alternative hypothesis is provided by findings that suggest a framework for control based on anatomical and physiological parameters. Accordingly, it is possible to model control as neural activity within a parameter space, where a region of the space characterizes values of various neurobiological parameters needed for executive control. So long as control-relevant anatomical structures are intact and the neurochemicals on which their functionality depends are within their appropriate ranges, sensitivity to situational contingencies and nonconscious processes are appropriate aspects of control and goal-directed behavior, not obstacles to them.

Acknowledgment and References

This post is a condensed version of Christopher L. Suhler and Patricia S. Churchland, “Control: conscious and otherwise,” Trends in Cognitive Sciences 13 (Aug 2009): 341-347. Numbers in brackets correspond to the article’s References list.

Comments on Control: Conscious and Otherwise

I agree with much of what Suhler and Churchland (S&C) say, especially regarding their two main points: (1) that lack of conscious awareness of one’s reasons or intentions (at the time of action) need not entail that one is “out of control” such that one is no longer morally or legally responsible; and (2) that cognitive science and neuroscience can inform us about the neurobiological processes involved in being in control, and this will help us better understand how children develop control and how people lose control in a way that does mitigate moral and legal responsibility.

My disagreements with Suhler and Churchland have to do with the way they situate their target and thus the way they understand the potential challenge posed by results from situationist social psychology. They target the Frail Control hypothesis, which they associate with John Doris’ use of situational social psychology to challenge ‘neo-Kantian’ theories of freedom and responsibility. S&C claim that these theories emphasize the importance of conscious awareness at all stages of decision-making and action: “consciousness must play a substantial role in most or all steps leading to a free decision: deliberating, choosing, intending and acting.”

But philosophers who stress the importance of conscious reasoning for the type of control required for free and responsible action need not focus on conscious activity at the time of action. Rather, they can, and should, adopt a more Aristotelian (or athletic) model of action, one that emphasizes habituation and planning. We consider how to act in various circumstances (e.g., when people need help or when we’ll lecture our students or when our tennis opponent hits an approach shot) and we plan, and perhaps rehearse, how we will respond in such circumstances, ideally habituating ourselves to respond without having to consciously consider what to do at the time of action. This model is entirely consistent with many compatibilist theories of free will that emphasize the importance of regulating one’s actions in light of one’s reasons or values, but has no need to stipulate that this regulation always or often occurs consciously just prior to, or during, action. This model, of course, fits nicely with S&C’s model of nonconscious or automatic control, and it is amenable to evidence about how nonconscious processes can help us control our actions in light of our previous conscious reasoning and rehearsals.

But this model suggests that a more appropriate target for S&C would be those scientists, such as Daniel Wegner, Benjamin Libet, and perhaps John Bargh, who take their results to threaten free will precisely because they assume that conscious awareness of one’s intentions just before one acts is required for free action. If we allow that free and responsible actions are often performed without immediately prior conscious decisions or intentions, then these scientific challenges lose their bite. Consider that in the Libet experiment, subjects presumably consciously accepted the instructions, which asked them to flex their wrist without (consciously) forming an intention to move it at a particular time, but rather to let the urge to move come upon them. S&C hence offer a useful reminder that it is only if one assumes an overblown conception of free will or conscious will that one should worry about evidence suggesting that consciousness is too slow to make things happen within milliseconds.

However, the Aristotelian model of control (or a reasons-responsive model of free will) is subject to a different sort of challenge that S&C neglect. They tend to throw together automaticity research and situationist social psychology. But the threat from situationism does not derive primarily from the fact that automatic processes control much of our behavior. Nor does it derive from the fact that we can be primed to judge or act without awareness of why we are judging or acting in that way. Rather, the threat derives from the fact that we can be influenced to behave in ways that conflict with or betray what we take to be our reasons to act—or what we would take to be our reasons to act, were we to consider our reasons.

As I have argued, the problematic results in situationism are the ones that suggest we can be powerfully influenced to act in ways that we would not endorse were we aware of what we were doing or why we were doing it (see Nahmias 2007; see also Nelkin 2005 and Vargas 2010). Situationist experiments show, for instance, that the presence of an impassive bystander leads people to sit impassively when someone needs help (bystander intervention effects); that being told one is late leads one to walk by a person slumped over and moaning (Good Samaritan study); that gentle prods by a scientist lead one to shock a stranger until one thinks they are unconscious or dead (Milgram experiment); that pleasant smells lead one to help someone one would not otherwise help; and so on. These are the sorts of effects captured in classic situationist experiments (and captured on film in the fascinating show “What Would You Do?”). The problem is not the mere influence of nonconscious processes but the “content” of those processes, which leads us to act against what we take to be good reasons to act.

It is plausible that people do not know about the influence of these effects, since when told about them, they reject the idea that they were influenced by them. But it is also plausible that people do not endorse the influence of these effects, since they typically protest that they were not influenced by them and do not think anyone should be influenced by them. Worse, people will often make up reasons to fit their situationally induced behavior, reasons that fit neither the likely causes of their behavior nor the sorts of reasons they offer when they have not just behaved that way. The threat from situationism is not the threat of automaticity—as we’ve seen, automaticity is often a wonderful thing—it’s the threat of rationalization: being influenced to act in ways that conflict with one’s better judgment and often being led to then rationalize one’s behavior to make sense of it to oneself or others.

Now, one might argue that nonconscious processes are leading us to act effectively in many of these situations, that we’re somehow better off being controlled by them than by any conscious, rational processes we might engage in those situations. I’ve agreed with S&C that automatic processes often get things right, but surely they can “mis-guide” us, even when the behavior falls “within the typical range encountered in the evolutionary past of humans, and thus levels of various neurochemicals on which the proper functioning of the anatomical structures depends can reasonably be expected to be within their normal ranges (the physiological condition).” This is because nonconscious processes are ballistic and somewhat inflexible—for instance, they will guide us to follow the behavioral cues of the people around us, since that is often a fast and frugal heuristic for effective behavior. But this very heuristic is what so often leads us to behave in ways that we would (and should) reject on further (conscious) reflection. Sometimes the group (or the authority figure) leads us astray, and we need to be able to resist its power. Sometimes it will take conscious effort to resist our automatic tendencies, or it might take efforts by ourselves, or our parents or teachers, to habituate us so that our more reasonable automatic tendencies win out over our less reasonable ones.

So, on the one hand, we need to get over the increasingly common mantra, “If my brain did it, then I’m not responsible” (see recent book title My Brain Made Me Do It and recent blog post: “Who’s in Charge Here: Me or My Brain?”). Rather, as S&C suggest, we need to recognize that we are our brains, or better, that the overwhelmingly complex activity of our brains (embodied in the world) somehow gives rise to our mental life, both conscious and non-conscious.

On the other hand, we need to recognize that, because so much of our cognitive and emotional processing occurs non-consciously, there are numerous ways we can be influenced to act that conflict with our consciously accepted reasons and goals. I do not take these situationist threats to be pervasive (e.g., to the extent that Doris does). However, I do take them to pose a significant challenge to the degree to which—and the contexts in which—we have the sort of control required for free and responsible behavior.

In correctly defending the role of nonconscious control, and in challenging a (straw man?) philosophical theory that says conscious processes must be engaged at every moment of decision-making and action, S&C thereby neglect the potential challenge of situationism to a more plausible theory, one that entails that we lack an important type of control when we are led to act, without our knowledge, against our better judgment. Advertisers know how to manipulate this feature of our psychology well. We gain more freedom and control when we learn (consciously) how to gain (perhaps nonconscious) control such that we act on reasons that we have accepted in planning our future behavior, or at least that we would reasonably endorse and not just rationalize after the (f)act.

I would like to begin by thanking Gary Comstock for inviting me to participate in this discussion. I would also like to thank Patricia Churchland and Chris Suhler for contributing such a thought-provoking post. As someone who thoroughly enjoyed their earlier paper, I am delighted to play along.

That being said, I now want to briefly explore some of the worries and problems I had with Churchland and Suhler’s intriguing ideas concerning the relationship between control, agency, and responsibility. For starters, they define the so-called “Frail Control Hypothesis” (FCH) as the thesis that “even in unexceptional conditions, humans have little control over their behavior.” Then, they go on to argue that because non-conscious control is both commonplace and impressively sophisticated, FCH is uncompelling. But I think they only arrive at this conclusion because they do not accurately capture the worry expressed by FCH.

On my reading of FCH, the issue is whether the kind of robust moral agency we have traditionally taken ourselves to have is undermined (even partially) by what we are learning from the situationist literature concerning the pervasive and sometimes surprising influence that morally irrelevant environmental stimuli—e.g., dimes in phone booths or leaf blowers on campus lawns—can have on our moral behavior via a series of non-conscious mental processes and events. In short, if we associate the moral agent with the conscious self, and if it turns out that the conscious self has far less control over human behavior than we traditionally assumed, then it is unclear why our notion of moral agency shouldn’t shrink accordingly. The worry that underlies FCH is not whether the conscious will is causally inert—although some researchers do adopt a more radical line on this front (e.g., Wegner’s “illusion of conscious control” and Libet’s non-volitional “veto power”). Instead, the claim is simply that to the extent that both the automaticity literature—which I am not going to address here—and the situationist literature suggest a more limited role for conscious control than we have traditionally assumed, we ought to thereby revise our notion of moral agency.

I take it that’s why people are so surprised by the data on situationism. No one thinks that whether they decide to help an elderly person in need could be greatly influenced by finding a dime in a phone booth or having a leaf blower making noise nearby. Yet, the situationist literature is littered with these kinds of results. On my view, the fact that dime-finding (or the lack thereof) and other morally irrelevant stimuli can sometimes influence our moral behavior should minimally give us some pause when it comes to our moral deliberations and judgments. As such, I am presently unconvinced that Churchland and Suhler’s otherwise exciting work on the causal relationship between non-conscious control and agency successfully allays the worries raised by Doris and others specifically about moral agency. I am nevertheless excited to see the fruits of their continued labor on this front since I am confident they will continue to shed additional light.

I agree with Nahmias and Nadelhoffer that Churchland and Suhler need to address more directly examples in which people act quite badly because of situational factors: as in the Milgram experiments and Zimbardo’s prison experiment, and the extent to which these situational factors are relevant to how we should think of Nazi treatment of Jews and others or the conditions in Abu Ghraib and other prisons.

Suhler and Churchland outline a conception of control that answers to what psychologists call controlled processes, where a process is controlled (roughly) if it is goal-directed and buffered against perturbations. But philosophers are usually interested in what has been called freedom-level control; the conception of control offered is not sufficient for this kind of control. Freedom-level control does require consciousness.

Specifically, freedom-level control requires access consciousness: the availability of information to a wide variety of consuming systems (since access consciousness is highly correlated with phenomenal consciousness, freedom-level control will almost invariably be accompanied by phenomenal consciousness as well). Either immediately preceding the action for which the agent is supposed to be responsible, or in the learning history of the agent (in the case of habitual actions), the agent must be access conscious of the goals of their actions. When we lack access consciousness, the goals of our behaviour may sometimes be at variance with those we reflectively endorse; under these conditions, the action is not fully expressive of the agent’s practical identity.

Why is access consciousness necessary for actions to be properly expressive of the agent? Information is access conscious if it is available for rational control; if it is simultaneously accessible to the decision-making, planning and volitional centres. Consuming systems are relatively modular and possess few links to one another; only through the conduit of the global workspace does information become available for the rational control of thought and behaviour.

The global workspace allows all the mechanisms constitutive of the agent, personal and subpersonal, conscious and unconscious, to contribute to the process of decision-making. Hence conscious deliberation is properly reflective of the entire person, including her consciously endorsed values. Whereas in the absence of consciousness my decisions reflect only some subset of the subpersonal mechanisms that constitute me, when I deliberate consciously the resulting decision genuinely expresses my real self. There may indeed be a good sense in which actions that are neither initiated nor monitored by conscious processes are controlled, but they are controlled only by a subset of the subpersonal mechanisms constitutive of the agent, and therefore may fail to be properly expressive of who the agent is. Most importantly, subpersonal mechanisms may cause actions that conflict with those the agent would consciously choose: when, for instance, they cause the agent to adopt a goal that reflects racist or sexist associations almost inevitable in many contemporary societies, but which the agent may consciously reject.

Only when agents are access conscious of the goals of their actions are these actions expressive of the person as a whole. Being expressive of the agent requires access consciousness, either at the time of the action, or (in the case of habitual actions) in the learning history of the agent. Only under these conditions is the action fully reflective of all the relatively modular systems constitutive of the agent. Full-blown moral responsibility does, therefore, require access consciousness. Only under these conditions are the actions controlled by the person, and not merely by some set of subpersonal mechanisms. I conclude that though there may be such a thing as nonconscious control, it does not underwrite attributions of moral responsibility.

Both Nahmias and Nadelhoffer make the important point that the challenges raised by the situationist literature have more to do with the “content” (as Nahmias puts it) of the processes that lead us to act than with the fact that such processes are largely/often non-conscious, and I agree with them that Churchland and Suhler’s post does not address those challenges sufficiently. However, Churchland and Suhler’s post may furnish the materials with which to begin to address this issue; whether it does, I think, depends in part on how we should understand the following passage from C&S:

“In keeping with the neo-Kantian perspective, the Frail Control hypothesis implicitly attaches enormous importance to whether the factors that play a role in an action can be consciously acknowledged as reasons. In our view, however, the general consciousness requirement for being “in control” is unrealistic. Exactly what role awareness of specific factors must play for an action to be considered controlled, relative to neurobiological criteria, is a matter not of stipulation, intuition, or semantics, but scientific discovery.”

Is the crucial issue consciously acknowledging a reason, or is it consciously acknowledging something as a reason? The distinction is important, and I take Nahmias and Nadelhoffer to be saying that the fact that an agent would not count certain factors as good reasons for acting, factors which, it turns out, are relevant to behavior, is important to our assessments of control and responsibility. That is, they both concede that non-conscious processes may not, by themselves, threaten control and responsibility, but the more important question is whether the factors to which these non-conscious processes are sensitive conflict with what an agent would consciously declare to be her reasons (or what she would count as good reasons).

Perhaps, however, we should read C&S as also giving (suggesting?) the more provocative argument that, in a great many cases, an agent can be in control even when factors relevant to behavior are not of the sort that the agent would consciously count as being (her) reasons (or count as being good reasons). C&S write “Notably, cognitive, motor, and social skills, including those that underlie habit and routine, are often invoked in later explanations of actions and are certainly robust enough in their guidance of action to be considered genuine reasons.” Are the explanations here the kind that the agent herself would acknowledge, if asked? It is not clear that C&S require that they be so, and in fact their earlier reference to the social behavior of non-human animals suggests that creatures can act for reasons–and be in control–even when those reasons could not be the object of conscious reflection and evaluation.

To the extent that we grant that a creature can be in control and act for reasons, in the face of a gap between what those reasons are and what the creature could or would consciously acknowledge, to that extent we put pressure on our ordinary, conscious assessments of the factors which are (or should be) relevant to our decisions and behavior. This is not to say, of course, that we should automatically count as good reasons those factors which do not pass various consciously conducted “tests” for good reasons, and I don’t think C&S are suggesting this, either. Rather, the point seems to be that if we take seriously the model of control offered by C&S, it may turn out that many of the situational factors implicated in behavior do not threaten control to the extent to which they are often thought. Not simply because non-conscious processes are consistent with control, but also because conscious assessments of the relevance of situational factors may (often?) fail to reflect features on which non-conscious processes depend, processes that are indispensable to living successfully.

Christopher Suhler and Patricia Churchland’s work on control beautifully illustrates how neuroscientific details might be used to shed light on philosophical questions. They have done a service by drawing philosophers’ attention to the important empirical research on control, which clearly bears on questions of freedom and responsibility. More specifically, they have argued that such research can be used to deflect the “Frail Control” hypothesis, according to which human beings are subject to influences that force us to revise our traditional conceptions of responsibility. To illustrate the Frail Control hypothesis, they single out the Situationist challenge, according to which human behavior is so prone to extraneous situational influence that traditional accounts of how we act (the idea that decisions are made on the basis of character traits, moral values, and careful deliberation) must be revised (Doris, Harman). Suhler and Churchland reject this claim, and in these brief remarks I’ll try to explain why I think the Situationist challenge may be more resilient than they realize. I will echo many of the other comments (especially Nahmias and Nadelhoffer).

I can identify three potential lines of reply to the Situationist challenge in Suhler and Churchland’s target post. First, they point out that humans and other mammals ordinarily exercise extraordinary control and need to in order to survive. This, though, is no response, for the types of control furnished by evolution (delaying gratification, carrying out a multistep task, slowly acquiring a skill, etc.) are no safeguard against situational influence. Each form of control, in fact, played a central role in historic atrocities such as the Holocaust, yet it is precisely in these contexts that Situationists warn that human decision-making is vulnerable to extraneous influence—a charismatic leader, social pressure, scapegoating during crisis, bias against an “out-group” within, etc. Situationists argue that, under such pressures, human agency can be compromised, and, in some cases, responsibility is collective or institutional rather than (solely) individual.

Second, Suhler and Churchland try to rebut the Situationist challenge by suggesting that promoters of that challenge have implicitly and erroneously committed to the claim that control must be conscious. On this reading, the Situationist challenge says the prevalence of unconscious influences on behavior entails that control is frail. Against this, Suhler and Churchland claim that lack of consciousness does not entail absence of control (as in the case of carefully learned skills). This issue of consciousness strikes me as a red herring. The Situationist challenge would arise even if we had full consciousness of the situational factors affecting us. For example, participants in Stanley Milgram’s infamous experiments inflicted potentially lethal electrical shocks on an innocent stranger simply because they were asked to by a man in a lab coat. They may have been conscious of the fact that they were doing this because an authority told them to. The Situationist challenge here makes no appeal to unconscious influence. It says, simply, that situational influence may outweigh stored moral values, enduring character traits, careful deliberation and other internal sources of motivation, and thus we must revise theories of responsibility that assume decisions are primarily driven from within. Should we blame the participants for their behavior, or work to avoid situations under which such behaviors are likely to arise given the way human psychology works?

Third, and most constructively, Suhler and Churchland sketch a neurobiological account of control and argue that the brain structures and chemicals associated with control are probably working normally in cases of alleged situational influence. Thus, people who are swayed by circumstances (like participants in Milgram’s study) have control and they can be held responsible in the ordinary sense. Here I will not take issue with the neurobiological details, though I do think that Suhler and Churchland have made the notion of control so encompassing that they will not be able to provide a unified biological account and the construct will be vulnerable to scientific elimination. The deeper problem with this reply is that it only serves to underscore why the Situationist challenge is so disturbing. If healthy brains acting under normal circumstances can be radically influenced by small situational factors (a lab coat, finding a dime, running late for a meeting, smelling cookies, holding a warm beverage, words flashed on a screen, sitting in a hot room, etc.), then situational influence is pervasive and difficult to avoid. Thus, if neuroscience confirms that we have control when such factors influence our behavior, then it will turn out that control is not enough to prevent us from being manipulated like marionettes by external variables. On this rendition of the Situationist challenge, we are, for example, wired to choose blind obedience. If so, our practice of punishing perpetrators may turn out to be neither the most efficacious nor the most just practice of doling out blame.

To rebut the Situationist challenge, Suhler and Churchland would need to show that the kind of control found in the brain can make us less vulnerable to situational influence. Perhaps their research will ultimately establish that, but for now, the jury is still out. The problem is not that we have Frail Control, but that the strong control we have is a frail resource for insulating ourselves from external circumstances that can lead to bad decisions.

My worry concerns the assumption of a given, goal-oriented individual. The situationists’ insights may disrupt how we think we come to form goals in the first place.

Eddy Nahmias, above, points to situations where goals or actions seem to arise in the course of other actions or to be thoroughly altered or redirected in certain situations, such as the example of forming a goal or response as to how to treat a bystander, something toward which we had not formed an intention before seeing him or her. Suhler and Churchland are right to point out the power and control our brains have to carry out preset, given goals, that is, our ability to overcome later situational effects if we have firmly set a goal. The problem for me is how the “situations,” the environment and biology of an individual, determine the goals in the first place. Situationist effects raise (further) worries as to exactly how and why we chose the goals we did in the first place, and this problem raises questions of responsibility and law in its own right.

Thank you very much to Suhler and Churchland for their thoughtful and helpful paper. At least it is a start toward rehabilitating the somewhat-maligned (in some of the experimental philosophy literature) “control” based approaches to moral responsibility, especially those that seek to define the relevant sort of control in terms (roughly) of “reasons-responsiveness”. Suhler and Churchland show, helpfully, that an important kind of control–involving responsiveness to environmental inputs or cues–is consistent with the relevant scientific data.

I also agree with Eddy Nahmias (and others, such as Jesse Prinz) that it is only a start, and that it is still problematic that we seem to be inaccurate or unreliable detectors of what really motivates us in our behavior. As a proponent of a reasons-responsiveness approach to control and moral responsibility, this does worry me; I have for a while been troubled by the data, as described by such philosophers as Doris, Harman, and many others. Frankly, I’m not sure what to make of it. I do think that we reasons-responsiveness theorists need (at some point) to address it or make sense of it in light of our approach.

If (and this is a big if) the data really do show that we are systematically terrible and unreliable motivation-detectors, then I think this is a big problem for the reasons-responsiveness approach to control and moral responsibility. I’m not saying “insuperable”, but “big.” But do the data really establish something this strong? When, for example, I’m driving along and I see a motorist stranded by the side of the road and I stop, am I really wrong about my motives/reasons? Aren’t at least many situations pretty clear, like this one?

And if in the end what the data really show is that we are imperfect, and perhaps frequently wrong, in our views about what motivates us, what exactly would that show? I have no doubt that we are somewhat unreliable in this regard. (Various theorists, Freud pops to mind [no doubt, that’s “significant”!], have been making this sort of argument.) One thing it might show is that we should educate people and make people aware of the pitfalls of certain sorts of situational factors, so that we can seek to avoid undue influence by them. And this would seem entirely compatible with also seeking to design institutions and social structures so as to minimize the effect of such factors. (I don’t see why we couldn’t do both, and it would seem both would be arguably desirable.)

I take the situationist literature, or some of its purported upshots, to be a genuine and important challenge. Frankly, I am not competent to interpret or evaluate it—to ascertain how strong the conclusions really are, and how well-supported the strong conclusions are. If the conclusions are less strong than have been advertised, it is not clear that the prospects for reasons-responsiveness approaches are not bright.

Well, I should have been a bit more careful in thanking Suhler and Churchland for their philosophical assistance. They DO help to rehabilitate “control-based” approaches, but not “reasons-responsiveness” approaches, at least given that reasons must be conscious. I’m not sure that I accept this assumption, but I should acknowledge that S&C do not consider themselves in the “neo-Kantian” camp in this specific regard.

I’d like to thank Gary Comstock for inviting me to contribute to this discussion. As a social psychologist, I hope to add a somewhat different perspective here. The upshot from the stimulating paper by Suhler and Churchland is that we have more control than we would think, even if it’s not of the sort that fits with our lay theories. It is worth noting that the threat to control emphasized in the frail control hypothesis is a bit overstated; though I won’t go into detail here, there are a bevy of new models in what has been called the “second generation of automaticity research” which challenge the predominance of automaticity over controlled processes. With that in mind, and in agreement with Nahmias and Nadelhoffer, I think that the real challenge from social psychology is not to self-control, but instead to self-knowledge. Social psychological research threatens to undermine our reasons for action as illegitimate in the sense that we care about them as reasons (namely, as a means to explain and justify what we do). Moreover, there are ample and efficient motivated reasoning processes that prevent us from ever noticing this normative difficulty to begin with, further amplifying the debunking power of the social psychological data.

I think there are two distinct problems: 1) the unawareness of being influenced by environmental stimuli or non-conscious cognitive processes and 2) the motivation to reject the presence of such influences. Neither problem is addressed by positing non-conscious control processes. The first problem threatens to debunk specific reasons I might provide for a given action. And the second problem, in showing how Western lay theories about action and moral behavior motivate blindness toward such influences, threatens to debunk the value of the theories themselves.

If I help someone pick up her dropped papers because I smell a bakery nearby, I am unlikely (even with social psychological training) to cite that cookie smell as the source of my helping behavior. I would not find this reason normatively acceptable, and honestly would feel a bit silly in saying it was. More strongly, if I have implicit race biases that skew my grading decisions in an unfair direction, I might confabulate some “objective” reason for any noticeable grading disparities. In both cases—and especially in the eyes of others who are morally appraising my behavior with all of this information provided—such reasons become suspect.

In many cases, we can become aware of unconscious influences, and partially correct for them (for instance, in correcting how transient emotions influence moral decision-making). And non-conscious control can help with correction as well. Sure, in some situations – e.g. under time pressure or cognitive depletion – we still cannot do much. But noticing the influence is the critical first step. The more pernicious problem presented by social psychological theory is that our minds have many methods for ensuring that we never notice these influences in the first place. And if someone does make us notice, we are liable to shoot the messenger.

As suggested above, Western folk theories about mind and self do not tend to incorporate non-conscious influences in first-person explanation and justification. Nor do they include non-conscious control processes, for that matter. Rather, people cling to the “myth of the ideal agent” described by Dan Wegner and Dan Dennett, the access conscious sense of a freely willing, rational self. This myth is self-perpetuating, ensuring that people discredit unconscious influences as irrelevant to what they do. Cognitive dissonance-reducing processes are one means by which this consistent, rational self-image is maintained. Introspective limits are another. Emily Pronin and Lee Ross have done fascinating work on “naïve realism”, the tendency to assume that one’s introspection provides unmediated access to reality. Not only does this assumption cast unconscious influence as unrealistic, it also makes people liable to get quite hostile toward anyone who disagrees with their “objective” appraisal. Anybody who disagrees is cast as biased and irrational, with introspection corrupted by all sorts of nasty (even unconscious) influences. But the minute such imputations of unconscious influence are thrown at the self, defensive motivations skyrocket.

This naive realism presents what Paul Davies has called a “double difficulty” for self-knowledge: not only are people unaware of unconscious influences on behavior, but they also are motivated to stay that way. Non-conscious control won’t resolve the normative concern here; the issue of influence unawareness still exists, and is further clouded by tendencies toward naive realism about conscious will. People think they have adequate reasons and self-knowledge, even when they might be ultimately unjustified. And trying to convince people otherwise is quite difficult, because motivated reasoning is pitted against accepting the reality of unconscious influence. Suhler and Churchland are right that moral theory probably won’t change in light of social psychological data; but that will probably be due to the motivated resistance to this data, and not to any lack of threat to self-knowledge. Maybe there is some adaptive, social reason for this breakdown of self-knowledge to perpetuate itself on a large scale (as Roth hints at). But ultimately, this is a problem that non-conscious control is not equipped to solve, and one that leaves many a jaded psychologist banging his or her head against the wall when it comes to legal, moral, and policy implications.

This was a very fascinating essay to read, and I hope to see far more investigation into issues in free will (and control, moral responsibility, etc.) emerge in a similar fashion – i.e. with more attention paid to neurobiology, psychology, and science in general.

Certainly this essay is not an exhaustive or super-in-depth argument for control, and how that fits into the free will debate. But I would like to see more said about a couple of issues. First, the idea of nonconscious control is intriguing, and I am very sympathetic to such a view. But surely not *all* nonconscious control mechanisms are the kind which remain important for issues of free will – even if the brain/body is in impeccable health. Evolutionary processes have provided organisms with a vast array of autonomic *control* mechanisms, which, for instance, regulate our heart rate and body temperature or digest our food. Such mechanisms are nonconscious, we can become conscious of certain of their results, and they can be influenced by environmental (or situational) interactions. But surely these kinds of nonconscious control mechanisms are not the kind that remain important for free will or moral responsibility. So, I would like to see more said about the degree to which the two (control constitutive of free will and control mechanisms in general) vary, and what exactly would end up constituting control that would allow for free will and moral responsibility.
Secondly, I am intrigued as to whether conscious decisions would carry any more efficacy on this view when contrasted with nonconscious decisions (or conscious vs. nonconscious control for that matter). Most agent-causalists (among others) discuss *conscious* control, because they are concerned with agents, rather than the mere biological mechanisms. And, although this view seems not to concern itself with “agents” per se, when we start talking about what exactly has control, it seems to creep back into the argument. If we are talking only about biological mechanisms and nothing else, then my first concern seems to creep up again – i.e. which types of mechanisms constitute the appropriate type of control for FW and MR? Autonomic mechanisms? Only decision-based mechanisms? Higher-level brain functions? So, if we are discussing mechanisms for control, then what line is drawn to separate any mere control mechanisms from mechanisms which allow for the types of control which remain pertinent to the FW/MR debate? It seems that Situationist arguments pertain mostly to a view of *agents*, however, rather than the mere mechanisms which drive the agent (or system) along.

So, I have no real problems with the essay, method, or future route of the project – in fact I am very sympathetic and completely supportive of, and excited about, it all – but I would like to hear more about similar concerns (like those above), which remain pertinent, I think, to the arguments for and against FW & MR.

In Jorgen Hansen’s nice comment above, he says that he’d like to see investigation into free will, control, and moral responsibility pay “more attention to neurobiology, psychology, and science in general.” Fair enough. But please allow me to express the thought that I’d like to see discussions in neuroscience about free will pay more attention to the philosophical discussions! It would seem that we would all benefit from a more nuanced, sophisticated, and genuinely interdisciplinary understanding.

Frankly, I could do much better on this score than I have in the past.

But equally frankly, I have often been floored by the lack of sophistication about philosophical discussions of free will–the range of (say) compatibilist options–in the neuroscience literature (which, admittedly, I haven’t read enough of to make such pronouncements with much authority).

John, I couldn’t agree more. I suppose I should have added “and vice versa” to my comment. Now that neuroscientists are more and more offering comments (or arguments) on the issue of free will, it would be nice to see more convincing arguments. After all, no neuroscientist will ever take seriously a philosophical argument which pertains to, say, functional neuroanatomy, unless the argument displays quite a bit of understanding of both the philosophical and scientific ends. Patricia Churchland has bridged this interdisciplinary gap fabulously in her works, by adequately understanding both ends of the argument. So, as this interdisciplinary intrigue continues to grow, I would like to see more philosophers take the effort to understand neuroscience in more depth (which we have certainly seen), and I would also like to see neuroscientists take the effort to better understand the depth of the philosophical arguments that they wish to throw their two cents into (which we haven’t seen as much, I don’t think).

Here’s a question I’ve been wondering about for a while–maybe some of you have thoughts. In any case, I thought I’d post it for your consideration.

Theorists from various disciplines have for a very long time contended that our introspective reports (as it were) of our motivations can be unreliable and often inaccurate. Freud of course contended this. But various others have made similar moves. So I say that I helped that stranded motorist because he or she needed the help, I wanted to be of assistance, and so forth. But the psychological egoist insists that my “real reason” or “deep reason” is that I wanted to be seen as helpful, which of course is in my best interest. And so on.

Obviously, the Situationist Challenge, although structurally similar to these traditional skeptical challenges to our reliability or transparency, is different in some ways. For example, Situationism purports to be empirically grounded, and so forth.

A question I have is why we shouldn’t just admit that our “real reasons” might be very different than we suppose. Further, it might be that I’m not at some level of analysis or penetration “really” altruistic–maybe I’m egoistic at the deep level. And maybe I only helped someone because I had recently found a dime in a phone booth, and so forth. And, my more basic question is: why shouldn’t our responsibility practices link with our relatively superficial space of reasons, rather than the (putative) deep space of reasons?

To elaborate (slightly). I’m happy to link moral responsibility to the space of reasons that are relatively accessible to me, such as that I want to help people in distress, they need me, and so forth. And if you (or my psychotherapist, if I had one) told me that my deeper reason was selfish, I might say, “Fine”, but still I might think of moral responsibility practices as linked to the relatively superficial space of reasons. For other purposes–ascertaining who the “real me” is, figuring out stuff like “authenticity”, and so forth, perhaps we need to penetrate to the deep space of reasons; but for responsibility practices, the superficial level is where we operate, as it were.

Why not say something similar about the challenge from Situationism?

Caveat: I realize I’m just “philosophizing in space”, like a quarterback forced out of the pocket, and there are many distinctions I have elided, and so forth. But what do you think?

John, I think that you make a very insightful point–it might be that the social practices of reasons-giving rely upon more superficial, consciously accessible reasons. There might be some adaptive social function for the kinds of superficial reasons we do provide–the kinds of reasons that locate us as rational social agents, instead of passive pinballs in a game of unconscious influence. There could be some important motivational function here; conceptualizing ourselves and others as rational agents (which is presumably the content of the kinds of superficial reasons which are accessible to consciousness) might push us toward greater social cooperation and fair treatment of others, etc. I’m thinking here of some of David Velleman’s writings on how effective social interaction depends upon the coherent narratives we construct about our actions and those of others.

On Velleman’s account, these narratives have to fit with a larger, socially validated conceptual scheme about what it means for an action to make sense. The kinds of superficial reasons that we draw upon in explaining our behaviors might fit this bill: they are drawn from what might be seen as a culturally situated library of acceptable reasons for action. This account could also help us to understand people’s motivated resistance to challenges to these superficial reasons. If I challenge someone’s superficial reason for action, or myself provide a normatively unacceptable reason for action (e.g. I helped because I found a dime in a phone booth), I might be seen as not playing by the rules. As Velleman puts it, I will be seen as having stepped outside the socially validated rules for reasons explanation, and my status as a fellow social interactant will be called into question.

In addition to there being a socially adaptive reason for the kinds of content that our superficial reasons contain, there might also be an adaptive reason for why “deep” reasons are inaccessible to consciousness. Tim Wilson has speculated in some of his writings that the workings of the “psychological immune system” (e.g. cognitive dissonance reduction, motivated self-perception, and other kinds of self-serving cognitive biases that help us get through otherwise negative, challenging events) are better off being inaccessible to conscious awareness. He’s speculated that peering into the workings of the psychological immune system threatens to disrupt its function. So maybe it’s important that we maintain some unawareness of unconscious influences, in order to keep up the social game of superficial reasons-giving. And even if we do notice these unconscious causal influences, maybe it’s best that we keep them from becoming licensed as normatively acceptable reasons for action.

That being said, the naive realism that I described in my earlier post has been posited as perhaps the driving force behind intractable intergroup conflicts and disagreements. Even if there is some socially adaptive reason for “superficial reasons” to cover up the reality of unconscious influence, it still seems that there are some truly disastrous negative side effects.

John, I have been thinking about some issues similar to those you raised above. One potential problem I’ve encountered with attributing moral responsibility specifically to the sphere of “superficial” reasoning arises from considering certain psychological disorders. The ever-famous kleptomaniac wants to steal, has reasons to steal, desires to steal (etc.), but many people tend not to say that the kleptomaniac is morally responsible because those reasons/desires/etc. are of the most superficial nature. Sure, she might have reasons for stealing, but deep down, she has real psychological problems which cause such desires/reasons and so we don’t pay as much mind to her superficial reasons as we do to her deep self. This could potentially carry over to other, more normal, cases: e.g. where I help somebody because I superficially want to be altruistic, but I may in fact be doing so for egoistic reasons at a deeper level (although, I am not very convinced that we are all selfish at heart).

John,
Moderate reasons responsiveness, as you have elaborated it, requires responsiveness to be patterned; situationism suggests that there are no patterns of the kind you want. More generally, suppose I A, thinking that by A-ing I assist you. But suppose I wouldn’t A were A-ing not in my interests. Since in all worlds in which A-ing is not in my interests, I don’t A, it seems a stretch to say that I deserve praise for A-ing.

On volition, Aristotle had it right.
Imagine a philosopher, as it might be, driving to the university on a cold morning, without adequate sleep and in an irritable mood. Pulling out of the driveway he (or she) sees a toddler playing in the street. But unconsciously (or consciously) the philosopher hasn’t had his coffee for the morning, and instead of hitting the brakes, s/he hits the accelerator and kills the child. Question: Is Frail Control going to be either an ethical or a legal defense?

Of course not. Studies of fatal human errors in traffic, in industrial accidents, and in aircraft accidents place the blame squarely on the human (aided frequently by a conspiracy of bad circumstances). Airliner pilots in such accidents never fly again. Drivers are sometimes allowed to drive again, perhaps unfortunately. And those who can be shown to act with disregard of human life, or with deliberate and murderous malice, are commonly punished.

The result is that we all fear to act irresponsibly, and we monitor our own driving for fear of hurting others. That is both a conscious fear, and it is naturally a self-training routine also, so that our moment-to-moment conscious imaginings are turned into automatic “policies” that allow our basal ganglia to guide consciously what was once unconscious. The neurobiology of voluntary control is now known with exquisite precision. It looks far more like Aristotle than like a post-modern fantasy world. No one should be surprised.

The issue of control is central to understanding of human nature and how the mind works. Suhler and Churchland have provided a key insight: Control is a mixture of conscious and unconscious processes. Colleagues and I recently reviewed the experimental literature on consciousness and behavior. We found plenty of strong evidence that conscious thoughts cause behavior — but we found zero evidence that any behavior is controlled by exclusively conscious processes (Baumeister, Masicampo, & Vohs, in press). The possibility of purely unconscious behavior is also slight, unless one counts as behavior such things as digestion, breathing, and blinking. Hence we suspect that every human behavior is a product of both conscious and unconscious processes.

Indeed, the proximal cause of any motor behavior is nerve cell firings, and these are invariably unconscious. Conscious control therefore inevitably works by means of unconscious processes.
Conscious control is therefore, effectively, remote control. As many researchers have been saying, behavior in the here and now is mainly executed by automatic processes, most of which occur outside of consciousness. Yet this does not rule out conscious control by indirect means. In the heat of the moment, the automatic mind turns to its programming to know how to act. But conscious thoughts (at other times) can alter that programming. For example, after a failed or regretted action, people may reflect on what they could have done differently, and these reflections contribute to behaving differently the next time a similar situation arises.

Regarding Frail Control: Doris is correct that situational causes exert an influence over behavior. But large effects are rare. Most effects just shift the odds slightly. Large effects depend on everything else being carefully screened out and on conscious attention being systematically managed so as not to interfere (and often to cooperate). The situational effects of social psychology allow plenty of room for conscious control.

The intuitive assumption may have been that all control is conscious, with unconscious processes occasionally intruding to alter responses (such as Freudian slips). The reality is more likely the other way around. Most behavior is directly controlled by unconscious processes, with consciousness sometimes intruding to override or alter responses.

A key feature of consciousness is unity: A person consciously thinks one thing at a time (including combinations). Why is this necessary? A solitary creature dealing only with the physical environment might not need a unified self, or indeed any conscious thought beyond a basic animal’s awareness of its surroundings. But human social life depends on conscious thought. Moral responsibility, for example, is judged by peers as an action of the whole person acting as a unity. A misdeed leads to punishment of a whole person, not punishment of an unconscious subroutine.

Legal and moral judgments often hinge on the question of premeditation. Actions in the moment may be impulsive and even mostly unconscious, but premeditation is by definition conscious. It shows how consciousness works. Knowledge relevant to a possible action is scattered across diverse sites in brain and mind. Mentally simulating an action in advance enables all these sites to chime in with suggestions, refinements, and objections. In that case, the action can fairly be said to reflect the whole person. In contrast, acting without thinking could be a case of one stimulus activating one brain and mind site to elicit a quick response, so that the action is not informed by some of the person’s values and experience.

Bernie, am I right that you meant to type: “…our moment-to-moment conscious imaginings are turned into automatic ‘policies’ that allow our basal ganglia to guide unconsciously what was once conscious“?

My fingers took off all by themselves. I’m not responsible for my basal ganglia, but I AM responsible for scaring the daylights out of my basal ganglia when they go off the wrong way. That’s me, my cortex and I.

One of the interesting biological oddities is that in humans, at least, cortex (and thalamus) are really tightly coupled with conscious and voluntary processes. It’s sometimes argued to be less so for cats and such, but all the evidence I know points straight to cortex. (It’s the thalamocortical system, really.)

Because consciousness is ancient evolutionarily (at least 200 million years for mammals alone), it is very likely that different layers of the brain supported conscious functions at different stages of phylogeny. One possibility is that the human cortex, as it ballooned to occupy perhaps 80 percent of our crania, also acquired the ability to inhibit prior ‘seats’ of consciousness. You need to do something like that over evolution to avoid crossed and self-defeating signals and output commands. I believe that visual cortex inhibits the superior colliculus, for example, and the basal ganglia have lots of inhibitory connections.

So one possibility is that we are talking about a multi-storied building, but that evolution can’t just add another layer of control without a lot of integrative fixes. Try adding another computer at home and get it to play well with the original, and you see the problem.

Jaak Panksepp and Bjorn Merker have made arguments for brainstem regions to be involved with consciousness in ancestral species, including perhaps the reticular formation of the brainstem and midbrain, and the zona incerta. Panksepp has demonstrated beyond doubt that the PAG (brainstem & higher gray matter around the liquid-carrying aqueduct in the center of the brainstem) is involved in mother-infant attachment, distress cries, and cuddling/soothing/suckling behavior. That does not imply consciousness as such, but it suggests that closely related functions may be very ancient.

As for the sleep/waking/dreaming cycle, that’s all over the place with mammals, birds, and maybe other critters.

Understanding has causes, but the understanding itself screens off these causes, making them morally irrelevant. Thus understanding supervenes on its causes. To the extent that we have ability to act, which can have important nonconscious influences, moral responsibility emerges.

Thus we can have responsibility and agency, although not true free will, in a fully causal universe. (Any influence of quantum randomness merely dilutes ability.)

I believe that “Free Will” refers to the freedom or control condition on moral responsibility. Thus, because I believe that “guidance control” is precisely this condition, I believe that we can indeed have “true free will”, even in a fully causal–and fully causally deterministic–universe.

I’d be curious as to why you do not think we can have “true free will” in a causal universe. Is it because you think that “true free will” requires freedom to will and do otherwise, and you accept that causation (or causal determination) would rule out such freedom? Just curious.

John, If Free Will is just guidance control, does that mean it doesn’t matter how we come to be the way we are, or what we come to be — just so long as we have guidance control?

More generally: the idea of being able to bring things to consciousness seems extremely important. But the old problem of FW and MR remains. Suppose we somehow convert everyone so that they can, and indeed do, always bring everything to consciousness (situationist influences, Freudian influences, the lot). Some do good, given how they are, and some do evil, for the same reason. We can certainly reward the former and punish the latter. But we can’t claim that punishment and reward have anything more than a purely instrumental/pragmatic justification.

We would like to begin by thanking Gary Comstock and everyone else involved in “On the Human” for inviting us to contribute a target post and for all of their work in making this forum possible. We are also, of course, extremely grateful to everyone who has responded to our paper thus far for their stimulating and insightful comments.

Eddy Nahmias, Thomas Nadelhoffer, Gilbert Harman, Jesse Prinz, and others voice variants of the concern that our account does not address the fact that people are still behaving “badly” in certain situations. We do not deny that certain social psychological studies appealed to by situationists, and people’s behavior in them, can be disconcerting – at times, they certainly are (the famous experiments of Milgram and Zimbardo are vivid examples). But the point of our argument was not to say that people behave ideally regardless of the situation; one would not need elegant social psychology experiments to know that this is false. Rather, our aim was to counter the claim that everyday situational factors, often processed nonconsciously, are capable of undermining control and, with it, responsibility. If the requirements for control are understood neurobiologically, as we are suggesting, we see little reason that the mere fact that someone behaves “badly” in the face of everyday situational pressures (e.g., being more likely to help another individual after finding a dime or being exposed to the aroma of freshly baked cookies) should confer upon her diminished responsibility. On our account, then, people in situations such as those in Isen and Levin’s (1972) “dime-finding” and “cookie” experiments are in control and are responsible; the fact that the situational factors in question exerted their influence nonconsciously does not undermine control or responsibility. (We return below to the issue of just how pervasive large effects by seemingly trivial situational factors are, particularly in real-world rather than experimental conditions.)

We very much agree with Nahmias that an Aristotelian understanding of action emphasizing the development of (automatic) habits of behavior has much to recommend it in light of the ever-growing body of data on the pervasiveness of nonconscious cognition. Certainly, it is much more in accord with the data than a position which demands substantial conscious involvement at the time of action for control/responsibility to be present. (Bernard Baars, too, argues for an explicitly Aristotelian view of action in his reply.)

Aristotelian approaches to issues in action theory could come in different flavors depending on the inclinations and other philosophical commitments of the theorist. One possibility, of particular relevance to the remarks of John Fischer, is that this approach could be used to construct a reasons-responsiveness framework that is more resilient in the face of situationist challenges and the social psychological data underlying them. There are (at least) two virtues of this approach worth noting. First, as Nahmias points out, it could provide a way around the psychologically problematic requirement of conscious acknowledgement of and reflection on reasons at the time of action. And second, it could furnish the resources for the reasons-responsiveness theorist to address the “unreliable-motivation-detector” problem that Fischer notes as an important situationist threat to current reasons-responsiveness approaches. To flesh this out a bit, perhaps a reasons-responsiveness theorist, by shifting focus to appropriate (and often nonconscious) habits of motivation and responsiveness to reasons, would be able to drop the requirement that, at the moment of action, we must be consciously aware of (and correct about) what is motivating that action. Indeed, as Martin Roth suggests, one could even go further and say that the reasons to which nonconscious processes are sensitive need not be ones which we would consciously acknowledge as reasons were we to become (consciously) aware of them. (We should reiterate that, as Fischer notes, we are not ourselves proponents of reasons-responsiveness approaches, but this does not preclude theorists who do incline toward such views from pursuing the approaches just sketched.)

Neil Levy raises an important potential objection to our view, namely that the prevailing psychological and philosophical notions of control do not map onto one another. He is correct that our starting point is psychological and neurobiological research on control (in the sense of effective guidance of actions, goal maintenance in the face of perturbations, etc.). Also correct is his claim that the sort of control philosophers are interested in does indeed require conscious control (this is what we were aiming to capture in our paper with the “neo-Kantian” conception of control). However, the neo-Kantian picture, with its requirement of consciousness/reflection for control, is precisely what we’re arguing against. Related to this, we take issue with Levy’s claim that nonconscious processes are “subpersonal”. They are, of course, subconscious, but this can only be equated with being subpersonal if one takes the person/agent to be restricted to the conscious sphere of cognition. But as we describe in the next two paragraphs, we believe that this view is becoming less and less tenable as data on human cognition, action, and development accumulate.

A worry that has been raised by a number of commentators is nicely summed up by Nadelhoffer as follows: “In short, if we associate the moral agent with the conscious self, and if it turns out that the conscious self has far less control over human behavior than we traditionally assumed, then it is unclear why our notion of moral agency shouldn’t shrink accordingly.” We agree that this is a worry if one takes on board the traditional philosophical equation of the agent with the conscious agent. But as with the issue of responsibility-supporting control, there are two ways one can go when faced with data of the sort presented by certain social psychology experiments. The first option is to stick with the standard notion of control/agency (what Nadelhoffer describes as the “associat[ion] of the moral agent with the conscious self”) and adjust our view of the sphere of agency, control, responsibility, and so forth in light of the social psychological data. This is the option Nadelhoffer, Doris, and others champion; they suggest that the range of situations in which people exercise moral agency (are morally responsible, etc.) may be significantly smaller than previously thought.

The second option, which we prefer and believe to be more consistent with what a broader range of scientific data tells us about human cognition and action, is to adjust the philosophical notions (of agency, control, responsibility, etc.) so that they are more in line with the totality of the data. In the paper currently under discussion, we aimed to set out this argument as it applies to control and responsibility. However, in a paper we are currently working on, we aim to extend this line of reasoning to the notion of agency itself, arguing that the restriction of the agent to the conscious portion of a person’s cognitive activity is unwarranted. Rather, what we propose is that the agent should be viewed as a unified whole encompassing both conscious and nonconscious processes (and the interactions and gradations between them, since as Baumeister points out nearly all cognitive and physiological processes will involve some combination of conscious and nonconscious contributions rather than relying entirely on one or the other). This is, admittedly, a substantial departure from the traditional philosophical picture of the moral agent, but we see little reason that philosophical views on human agency and action should not be responsive to what science tells us about human cognition and action.

Jesse Prinz remarks that “the issue of consciousness strikes [him] as a red herring”. Not surprisingly, we disagree with this assessment. Although one could perhaps formulate a version of the challenge from social psychology to control and responsibility which makes no mention of consciousness (as Prinz attempts to do with his example of the Milgram experiments), for many philosophers concerned with agency, control, and responsibility, the issue of consciousness is in fact paramount. Levy’s reply to our post, for example, nicely articulates the philosophical view of consciousness as a requirement for actions that are capable of supporting attributions of moral agency, control, and responsibility. Although, as described in our reply to Levy, we are not inclined toward this picture of action, his remarks, as well as those of others such as Fischer and Nadelhoffer, provide a clear illustration that the consciousness requirement for moral agency is one which many philosophers interested in moral responsibility take very seriously.

Prinz argues that “the types of control furnished by evolution (delaying gratification, carrying out a multistep task, slowing acquiring a skill, etc.) are no safeguard against situational influence”. What we are taking issue with, however, is precisely the notion that in order to be in control (morally responsible, a moral agent, etc.) one must be safeguarded against situational influence. This is most neatly captured in the final sentence of our post, where we say that “[s]o long as control-relevant anatomical structures are intact and the neurochemicals on which their functionality depends are within their appropriate ranges, sensitivity to situational contingencies and nonconscious processes are appropriate aspects of control and goal-directed behavior, not obstacles to them”.

Prinz also describes individuals as “being manipulated like marionettes by external variables”. Yet the use of ‘manipulated’ once more seems to beg the question against the sort of position captured in the concluding sentence of our post. What Prinz and others sympathetic to the situationist position are calling ‘manipulation’ we regard as highly useful sensitivity to one’s environment.

Furthermore, saying, as Prinz does, that people are being “radically influenced by small situational factors” and “manipulated like marionettes by external variables” itself radically overstates the influence of minor situational factors in everyday action or, for that matter, in the vast bulk of laboratory-based social psychology experiments. A naïve reader, upon encountering Doris, Prinz, Nadelhoffer, and others’ presentations of the social psychological data, might come away with the impression that she need only walk around with a plateful of cookies or a pocketful of dimes in order to exert total control over the people around her. But as an anonymous referee on our paper, self-identified as a social psychologist, commented, the vast bulk of effects found in the social psychology literature are nowhere near as large as those in the handful of studies situationists tend to focus on. Moreover, in contrast to the social psychology laboratory, where conditions are carefully tailored to minimize the influence of all factors other than the one(s) under study, in real-world environments one is confronted by numerous and highly various situational stimuli, internally maintained goals, demands on attention, and so on, each of which may contribute to behavior. As a result, one must be very cautious when extrapolating from the magnitude of particular variables’ effects under carefully controlled laboratory conditions to their magnitude in noisy real-world conditions. (Subliminal priming, which is a staple of the psychologist’s laboratory toolkit, provides an instructive example. Despite fears among many in the general population that powerful subliminal messages are embedded in advertisements for products or political candidates, and hopes for the effectiveness of self-help recordings employing subliminal messages, there is little or no evidence that real-world subliminal messages of this sort actually have any effects on people’s behavior – see, e.g., Merikle [1988], Theus [1994], and Trappey [1998].)

After writing the previous paragraph, we saw Baumeister’s response, in which he puts these points about effect sizes and generalization outside the laboratory even more elegantly and concisely than we did. He writes: “Regarding Frail Control: Doris is correct that situational causes exert an influence over behavior. But large effects are rare. Most effects just shift the odds slightly. Large effects depend on everything else being carefully screened out and on conscious attention being systematically managed so as not to interfere (and often to cooperate). The situational effects of social psychology allow plenty of room for conscious control.” Seeing the results of social psychology in this way provides a bulwark against the tendency among certain researchers (both philosophers and psychologists) and popular science writers to jump from exciting findings in cognitive science to sweeping statements about their implications for free will, control, and other crucial concepts. Furthermore, as the last sentence of the quote from Baumeister, as well as other portions of his response, suggests, this more nuanced understanding of the relationship between social psychological results and real-world action opens the door for views which see a place for both conscious and nonconscious processes in agency. This, as noted above, is precisely the sort of position we are developing in a paper currently in preparation.

Nadelhoffer, Prinz, and others bring up Milgram’s famous obedience experiments and historical atrocities such as the Holocaust as examples of situations in which responses to situational factors may mitigate responsibility. Yet appeals to such cases miss the point of our argument entirely. As we note at various points in our article, our aim is not to say that people are fully responsible in every situation whatsoever; rather, our target is the claim that seemingly trivial situational factors undermine control and responsibility (again recall Prinz’s remarks about people being “manipulated like marionettes”). Whether the soldiers in Nazi Germany who carried out the Holocaust – being under enormous social pressures, aware that their livelihood and even lives could be threatened by dissent, and so forth – can be considered fully responsible for their actions is of course a very important and interesting question. However, it is also one which is orthogonal to our argument, since it deals with far-outside-the-norm situational pressures and stressors. The goal of our argument was, instead, to counter the notion that “even in unexceptional conditions, humans have little control over their behavior” and are therefore not (or less) responsible for that behavior and its consequences. This denial that situationist arguments warrant a dramatic expansion of the range of responsibility-mitigating circumstances is perfectly compatible with some responsibility-mitigating circumstances existing, with the extreme cases cited by Prinz and others being a case in point.

Nahmias, Hansen, Fischer, and others point out that certain prominent psychologists and neuroscientists are at least as guilty as philosophers of making extravagant claims about the implications of scientific findings for free will, control, and other issues in action theory. With this we fully agree. We chose Doris’s situationism as our target because we believe it to be the most forceful and sophisticated attempt to use findings from empirical psychology to challenge common notions of agency, control, and responsibility. For reasons of space, we were only briefly able to acknowledge other proponents of “Frail Control” hypotheses, including psychologists such as Dan Wegner and (at least in some of his writings) John Bargh. But our own brief mention of these other researchers does not mean that their views (and others, such as those of Benjamin Libet) do not deserve deeper scrutiny for their oversimplification of the relationship between scientific findings and freedom of the will (control, responsibility, etc.). As Fischer notes, in the case of psychologists and neuroscientists making pronouncements on whether or not people have free will, the problem is less a failure to take into account a sufficiently broad range of scientific evidence than the use of somewhat loose or ill-defined notions of free will and other philosophical concepts.

Daryl Cameron, building on remarks by Nahmias and Nadelhoffer, suggests that the situationist challenge may need to be reframed; in particular, he proposes that “the real challenge from social psychology is not to self-control, but instead to self-knowledge”. This version of the social psychological challenge we fully endorse. Social psychologists have provided a wealth of evidence that conscious self-knowledge (in the form of introspective access to motives, influences on action, etc.) is nowhere near as transparent or incorrigible as our lay theories would have us believe. (These conclusions are underwritten not just by social psychological findings but also by work in neuroscience and other fields. Michael Gazzaniga’s findings concerning confabulation by callosum-sectioned [“split-brain”] patients provide one especially striking example.)

While agreeing with Cameron and others that psychological findings pose a grave threat to folk theories about the accuracy and extent of (conscious) self-knowledge, we wish to resist the move to the claim that they are also a substantial threat to agency, responsibility, and so on. The principal reason for this was noted earlier: we believe that scientific findings point toward a conception of the agent which encompasses both conscious and nonconscious cognition. While seemingly radical at first glance, and no doubt at odds with much of contemporary philosophy, this broader view of agency and action has, as Baars points out, both ancient roots and certain modern manifestations. He illustrates this point with the example of a groggy philosopher making a fatal error while pulling out of her driveway in the morning. Baars suggests that perhaps the criminal law, which does not accept the mere fact that nonconscious influences were involved as an excuse for traffic accidents, assault, fraud, or drunk driving, gets things basically right in this regard.

Thanks for the question. You ask whether it does not matter how an agent came to be the way he or she is, whether, that is, all that matters (for me [with respect to acting freely]) is whether the agent displays guidance control. My reply is that I have always argued for an essentially historicist conception of acting freely and moral responsibility. Guidance control is analyzed in terms of reasons-responsiveness and ownership, where ownership is explicitly a historical notion. Now, whether I have the history just right is certainly disputable; perhaps we would disagree about how far back along the sequence one needs to penetrate in ascertaining moral responsibility. My account is at least robustly historicist, but I do not have as strong a view as yours here (I think).

Is the way we think of conscious endorsement as special mistaken? Short answer: YES
Is there something special, after all, about conscious endorsement? Short answer: YES

I.
I would like to begin by thanking Gary Comstock for inviting me to participate in this discussion. I’ve enjoyed reading Chris and Pat’s insightful article and the thoughtful and helpful comments following it. I find something true on both sides. On the one hand, I agree with the authors’ point that conscious control is not always necessary for responsibility, and that non-conscious control, given certain conditions, can be quite sufficient for grounding responsibility. If the situationist literature shows only that our actions are often the result of non-conscious processes, it certainly does not threaten our responsibility practices. On the other hand, I also agree with the critics’ point. The situationist literature shows more than that: it demonstrates that our actions are often significantly influenced by causal processes that we cannot consciously, reflectively endorse. Here is where the challenge to responsibility lies: as long as we regard “conscious endorsement” as an important, if not necessary, condition for free action, the situationist literature shows that we lack an important kind of control over our actions.

II.
However, I do think the critics have so far missed an important point Chris and Pat are trying to make in their article, namely: what is so significant about the distinction between conscious and non-conscious control? In light of the current dialogue, we can rephrase this point in terms of endorsement. So, the question: what is so special about conscious endorsement? Just to clarify, I have no doubt that our folk psychology assigns tremendous importance to this “conscious endorsement” criterion, and the critics rightfully assume it. However, the authors can still argue that it is time to “eliminate” such an unjustifiable criterion in light of our best scientific knowledge about the mind.
First, we can observe that the conscious endorsement criterion is probably rooted in an outdated, unscientific picture of the mind, namely, the Cartesian Theater model. According to this model, consciousness is where the “real self” resides; thus, what is endorsed consciously is endorsed by the “real self”. I suspect this is where most of the “oomph” of this criterion comes from. To be fair, no one really takes the Cartesian Theater model seriously anymore (among the critics, at least). However, there is still a tendency to think of the brain as composed of a conscious system and a set of non-conscious systems, with the conscious system equipped with its own set of values, deliberative processes, perceptions, and control – not unlike a mini-agent in the brain, which one identifies as one’s real self. Consciously endorsing a causal process is like a pilot’s endorsing the autopilot device. “How else do I make the device and its actions MINE?”, one may ask.
I take it that this is exactly what Chris and Pat want to challenge: our best scientific understanding of the mind does not support the above picture; hence, the “conscious endorsement” criterion, however cherished by the folk, cannot be justified. Science tells us that our (access-)conscious processes are rather like an empty stage waiting for a show to be put on (the Global Workspace theory). The non-conscious processes determine what information gets on stage to be made accessible to all the other non-conscious processes. There is no separate set of values, memories, and control belonging to the conscious processes. The non-conscious processes determine what perceptual information enters the stage, what values are recalled, where the next step of deliberation goes, and what control the conscious process will exert. Given this picture, it is wrong to think that if we separated the conscious processes from the non-conscious ones, we would get anything remotely resembling an agent. So how can “conscious endorsement” be anything special, when what is endorsed consciously is also determined by the non-conscious processes? Why is “unconscious endorsement”, whatever that is, not enough?
To put it another way, the agent cannot be identified only with the conscious processes in the brain, because the agent is the whole brain, conscious or not (if not also the body and part of the environment). If this is what our best science tells us, on what grounds can we begin to disown actions influenced significantly by OUR brain processes, even if they are not consciously endorsed? Maybe it is time to give up a criterion based on nothing but bad metaphysics?

III.
If the above argument seems at least somewhat convincing, Chris and Pat are right to focus on anatomical and physiological criteria of control, while remaining agnostic about how these criteria fit into our conceptions of conscious and non-conscious control. However, I do think there is something special about “conscious endorsement”, even if its specialness cannot be grounded in the way the folk tend to think it can. That is, I believe our best science does justify treating “conscious endorsement” as special. I will not be able to argue fully for this point in this short comment; however, let me at least indicate how the argument could go.
First, what is consciously endorsed (usually) stands for the agent as a whole to a greater degree. I am partly following Neil Levy’s point here (see also his comment). Because the information in the global workspace is broadcast to various non-conscious processes, the non-conscious processes’ reactions to this piece of information can be sent back into the workspace to be further negotiated and compromised. What is consciously endorsed, especially after prolonged reflection, thus tends to reflect the agent’s various non-conscious processes, and hence the agent as a whole, to a greater degree. I agree with Jesse Prinz’s earlier point that what is at issue here is “reflective endorsement” (which entails conscious endorsement), rather than “mere conscious endorsement”: a piece of information that is merely conscious, without being properly negotiated among all the non-conscious processes, may not represent the agent any more than an insulated non-conscious response does.
Second, what is consciously endorsed tends to shape our non-conscious processes in the long term, and it acts as an expectation for ourselves and a commitment to others that we struggle to live up to in the short term. A sincere self-proclaimed egalitarian (even one who only consciously endorses this ideal privately) will “usually” take steps to eliminate his/her non-conscious biases, or at least prevent them from being expressed. If conscious endorsement can play such a significant role in our psychology and in our social interactions – that is, if it predicts what one is likely to become and how one is likely to behave in both the long and the short term – we are justified in treating the “conscious endorsement” criterion as special.*

In sum, I hope to have suggested that our best scientific understanding is compatible with our (somewhat qualified) folk conviction of the importance of conscious endorsement. We should let the distinction between conscious and non-conscious endorsement guide our further scientific and philosophical investigations into responsibility and free will. Unfortunately, this also implies that we still need to worry about the situationist challenge and what it means for the theory of agency.

*[Allow me to assume a just-so story here about the evolution of conscious processes (the second argument does not depend on it, but it will help boost the argument if true). It is quite plausible that our conscious processes emerged for the purpose of communication and cooperation in groups; what enters into conscious processing can be readily expressed in language or other means of communication, which in turn boosts our ability to cooperate with each other. Cooperation depends on (mostly) truthful communication, which poses a so-called “commitment problem”: how do we know the agent will do as he/she says? A psychological mechanism evolved or developed to (partly) help with this annoying problem – if the agent comes to have a desire to change him/herself in accordance with what he/she publicly endorses (in the long term), or at least to live up to it (in the short term), there will be less worry about the commitment problem, and we can continue cooperating happily with each other. Because our conscious processes have this evolutionary root, conscious endorsement is an internal version of public endorsement, and it is no wonder we feel the urge to shape ourselves and our behaviors accordingly.]

Thanks for the reply, Chris. I don’t see where I suggested that unconscious mechanisms = subpersonal mechanisms as a definitional claim. I made an empirical claim: consciousness is a global workspace, whereby subpersonal mechanisms communicate with one another such that only actions that have consciousness in their causal history (very recent causal history in all cases in which the action is not habitual) are fully expressive of the agent. I just don’t see *any* evidence, either in your response or in the original article, that even begins to suggest that this empirical claim is false. Rather, your evidence bears on whether unconscious processes can be flexible and responsive to environmental cues. In short, I claimed that consciousness was necessary for freedom-level control, though not for controlled processes; you replied that consciousness was not necessary for freedom-level control and cited, in defense of that claim, the evidence that it is not necessary for controlled processes.

John,
Chandra Sripada has been doing some interesting work on what he calls the “Deep Self Model” of the attribution of agency and responsibility. More specifically, he has evidence suggesting that we care a great deal about the concordance (or lack thereof) between the public face people put forward and the boundaries of their deep self. Whether this view sheds light on how we think and talk about reasons remains to be seen, but it seemed worth mentioning either way. Minimally, Sripada has provided us with a method of analysis – namely, structural equation modeling – that holds out the promise of enabling us to test the kind of view Daryl attributes to Velleman. The empirical element of this issue revolves around tracking the salient intuitions and judgments about reasons and their proper relationship with varying levels of the self. To see what he’s been working on, check out the following two posts:
http://experimentalphilosophy.typepad.com/experimental_philosophy/2010/02/telling-more-than-we-can-know-about-intentional-action.html
http://agencyandresponsibility.typepad.com/flickers-of-freedom/2010/05/more-on-manipulation.html

Patricia and Christopher,

Thanks for your illuminating reply. I just wanted to briefly recast my worry in the following way: You are ultimately arguing for a revisionist picture when it comes to how we should conceptualize control in light of the gathering data from the sciences of the mind. In short, you want to neurobiologize our traditional notions of control and agency so that they can accommodate the gathering data on the important etiological role played by non-conscious mental states and processes. That’s fine as far as it goes. However, the worry I raised in my earlier comments was that if one wants to revise our traditional notions of agency and control to accommodate the gathering data, then it is unclear why we would still cling to our traditional notions of moral responsibility.

Now, you are certainly correct that at this juncture we have a decision to make. One choice involves restricting our traditional notion of responsibility in light of our pared down notions of agency and control. This is the route taken in various ways by Doris, Harman, myself, and others. The other choice involves expanding our traditional notion of responsibility to cover some non-conscious mental states and processes. This seems to be the route you prefer. However, the real challenge for your view is to provide a revisionist account of moral agency and desert that enables you to distinguish the culpable from the excusable without itself depending on the traditional distinction between conscious intents, desires, beliefs, actions, reasons, etc. and their unconscious or non-conscious counterparts. Moreover, you will need to expand the notion of moral desert in such a way that it penetrates down to the realm of the non-conscious. As it stands, I don’t know what this kind of desert would look like.

Of course, I am admittedly skeptical of the notion of desert more generally, so perhaps the present case is simply a reflection of my more general worries! That being said, I nevertheless think that your expanded notion of desert is likely to be even more puzzling than the traditional notion of desert, since your revised account will necessarily involve justifiably making people suffer for acting on reasons that were perhaps essentially beyond the reach of conscious control. Now, you could simply jettison the notion of desert altogether and focus instead on some other notion of moral responsibility. I, for one, think that would be much easier than trying to develop a notion of desert that’s up to the task. But, like I already said, I am a skeptic about both free will and moral desert, so it’s no surprise I would prefer you to follow me down the path toward skepticism even while you’re trying to provide agency and responsibility with a firmer footing when it comes to the gathering data on the nature of human cognition and action.

As we learn more and more about the various influences on human behavior (conscious or otherwise, although I am skeptical that this is a meaningful distinction), the scenario Prof Strawson hypothesizes seems to be less and less fantastic. This suggests to me that in time the criminal justice system might move from an MR-punishment paradigm to a biological/social malfunction-therapy paradigm, in which case the many interesting issues raised in the essay and comments would apply as considerations in defining a program of therapy rather than in the increasingly difficult (and arguably decreasingly meaningful) determination of MR. In such a system, “therapy” might still include incarceration or even capital punishment since the intent would be primarily to benefit society and only secondarily to benefit the offender – and in some instances removal from society might be the only effective “therapy” available at the time. On the other hand, it might also include more aggressive early intervention, perhaps more palatable for those considered at risk once the stigma of “immoral” has been eliminated from prospective anti-social behavior.

Should this come to pass, the (seemingly) slightly aggressive final sentence in Prof Strawson’s comment would translate into “But we can’t claim that [therapy has] anything more than a purely instrumental/pragmatic justification”, a rather benign observation.

Linus Huang makes two important points in his reply. The first is that our best available scientific understanding of the mind suggests that the conscious-nonconscious distinction is not anywhere near as neat as our folk psychological theories would have us believe. (Charles Wolverton, in his reply, expresses a similar doubt about the meaningfulness of the distinction between conscious and nonconscious influences on behavior.) This is indeed a point that was lingering in the background of our article, and it is one which we are seeking to develop more explicitly in our current work. In particular, as described in our earlier reply, we aim to argue that the agent should be viewed as encompassing both conscious and nonconscious aspects of cognition (and the gradations between them).

Huang’s second point concerns what implications accepting a picture of the agent along these lines would have. The first possibility he sets out is to eliminate the requirement of conscious endorsement for agency altogether and say that nonconscious endorsement alone can be enough. The second possibility, which Huang, Levy, and others favor, is to say that if Baars’s conceptualization of consciousness as a global workspace is correct, there is still reason to regard conscious endorsement as special. The reason for this special status, in a nutshell, is that conscious reflection allows for interaction between a wider range of inputs (both conscious and nonconscious), rendering the resulting action more reflective of the agent.

This brings us to Levy’s most recent reply, in which he reiterates his concern that research guided by the cognitive scientific conception of control (“controlled processes”) does not really bear on the philosophical conception of control (“freedom-level control”). Levy holds that “consciousness is a global workspace, whereby subpersonal mechanisms communicate with one another such that only actions that have consciousness in their causal history (very recent causal history in all cases in which the action is not habitual) are fully expressive of the agent”. He characterizes this claim as an empirical one, and the first part of it (regarding whether consciousness is a global workspace) no doubt is. But the second part (regarding what constitutes something’s being “fully expressive of the agent”) is much more an assertion of allegiance to a particular philosophical account of agency than an empirical claim. Indeed, it is quite unclear how one would even go about empirically evaluating the claim that “only actions that have consciousness in their causal history… are fully expressive of the agent”.

One empirical claim that Levy may be making is that from the perspective of “freedom-level control”, having conscious processes involved is always superior. But unless one takes this as merely a matter of definition, we see little reason to accept it. This leads us back to our original point of disagreement with Levy, who seems to regard work on controlled processes as having little bearing on what we consider necessary for “freedom-level control”. The burden of our argument in the paper under discussion is precisely that research from psychology, neuroscience, evolutionary biology, and other fields provides reason to think that nonconscious processes are capable of supporting the species of control required for moral agency and responsibility – Levy’s “freedom-level control”.

Moreover, even if the global workspace story is basically correct, that does not mean that nonconscious processes are completely fragmented and local, lacking in coherent support from connections across the cortex (for more on brain connectivity, see Bullmore & Sporns, 2009). Nothing in the anatomy implies that nonconscious processes are not highly integrated and coherent, and the behavioral data on the sophistication of these processes suggest that blanket claims about conscious processes being more highly integrated and coherent are unwarranted. To be sure, some conscious processes may be widely integrated, and some nonconscious processes (e.g., in the retina) may not involve widespread activity, but it is quite probable that some nonconscious processes – such as those involved in social cognition, intellectual tasks (e.g., giving a lecture, or rapidly exchanging arguments and counterarguments with a philosophical colleague), and skills (e.g., playing basketball or hockey) – are very highly integrated indeed.

A further point worth briefly noting is that the utility of bringing nonconscious inputs into the conscious global workspace is not unqualified – there are many situations in which nonconscious processes are superior to or enjoy primacy over conscious ones. The case of skilled actions, discussed in our paper, is one clear example, but the superiority of nonconscious processes can extend to contexts where identifiable skills/habits are not operative (e.g., Dijksterhuis et al., 2006).

As noted earlier, Levy may be presenting the requirement of conscious involvement for “freedom-level control” as a matter of definition. But if having “freedom-level control” of a given action requires, across the board, substantial conscious involvement in close proximity to the initiation of that action, then – given the sophistication and utility of nonconscious processes operating without such conscious involvement – it is hard to see why anyone would want this sort of control.

Switching gears, we would like to thank Thomas Nadelhoffer for another insightful reply. Once more, we believe that his framing of the issues is spot-on: if one revises traditional notions of moral agency and control (whether along the lines we are suggesting or in some other way), then traditional notions of moral responsibility should shift with them. We are therefore quite willing to accept a shift in the notion of moral responsibility to accompany our proposed shift in the notions of control and agency. Merely endorsing such a shift is, of course, only a start, for as Nadelhoffer points out, the real work will be to develop a more detailed account of responsibility that makes appropriate distinctions between culpable and excusable actions, avoids a blanket “strict liability” policy wherein people are responsible for anything and everything they do, and addresses various other challenges. This task will be among those that occupy us in our future research. (We should mention that we share Nadelhoffer’s skepticism about moral desert and free will [at least in the agent-causal sense], so such notions, at least in their traditional forms, will be unlikely to appear in any account we develop.)

We wish to conclude by once more thanking Gary Comstock, Phillip Barron, and the others behind the scenes at OTH, as well as the terrific respondents to our article. We are under no illusions that the issues under discussion have been definitively resolved, but we hope that this forum has helped to advance the debate and that everyone involved has found it as enlightening as we have.