Is authority based on reasons to obey or reasons to agree to obey?

The plane has crashed. People are seriously injured, but you are not. A stewardess, directing things, barks an order at you: “Go get the first aid kit from the overhead compartment!” Intuitively, you are obligated to follow her order, due to her issuing it, even if it is not the best order she could have made (maybe the first order of business should have been finding water or finding the radio to call for help). In other words, something about the situation gives her authority over you (and the other passengers).

On Estlund’s “normative consent” theory (perhaps more aptly named “obligatory consent” theory) [DEMOCRATIC AUTHORITY, ch. 7], the salient thing is that, in this situation, you are morally obligated to consent to obey her orders if presented with the chance to do so. This is what explains your moral obligation to obey. That the chance was not presented, or that you otherwise did not consent (perhaps even by actively refusing to consent), does not matter. This is a particular version of hypothetical consent theory.

However, it might be that what explains the authority here (your obligation to obey) is simply relevant items of normative significance (e.g., that certain important things will get done only if such command-following occurs) favoring your obedience. Perhaps the fact that these same things also favor your agreeing to obey does no explanatory work at all. In that case, we have what is sometimes called a “direct normative justification of authority” (or “direct theory of authority”). It is hard to sort out these different explanatory proposals because the considerations that favor obedience typically favor agreeing to obey as well (and vice versa).

But here is a fanciful (but hopefully still useful) case in which these two favoring relationships come apart. Picture a moral world in which one’s honor is among the very most morally important things and one’s honor is compromised fatally by agreeing to obey another person. But the causal efficacy of obedience in achieving morally important outcomes remains. If all of that were the case, it seems to me that one would still have compelling moral reason to obey the stewardess even though one does not have compelling reason to agree to obey her. So there would be authority.

This suggests that reasons favoring agreeing to obey are not what does the important explanatory work, but rather reasons favoring the obedience itself. And hence that Estlund’s normative consent version of hypothetical consent theory is wrong. This case suggests as well (and more broadly) that hypothetical consent is relevant only as a way of indicating the constellation of considerations that generally favor both agreeing to obey (consent) and obedience. Ideal consent or the normative favoring of agreeing to obey (consent) drop out of any fundamental normative explanation of authority.

11 thoughts on “Is authority based on reasons to obey or reasons to agree to obey?”

I think I have some more fundamental problems with Estlund’s idea as you describe it. First, just how do obeying and agreeing to obey come apart? It’s not clear to me how they can come apart in the cases you describe. I might, of course, agree now to obey you in the future. But if I haven’t made such an agreement, and we find ourselves in the plane crash, and you’re the flight attendant telling me what to do — what would it be for me to agree to obey, as distinct from obeying? My sense is that there is an answer to this question, but I can’t see what it is. So this is really more of a question than an objection.

A deeper problem, though, seems to be one that I suspect arises for agreement-based theories in general: what reasons do I have to agree to φ that are not themselves reasons to φ? Of course, I might not have reason to φ independently of some agreement I make with others, because my reason to φ is contingent on other people doing something in return: I have no reason to show up to work early to monitor middle school kids in the lounge independently of my agreement with you that you will, in exchange, cover my usual lunch-time monitoring of middle school kids in the lounge. But in such a scenario, I have reason to show up early to monitor the kids, and my reasons to do that are the same as my reasons to agree to do it. I might also have other reasons to show up early and monitor the kids: maybe I just like hanging out with middle schoolers when I’m still on my first cup of coffee. But that’s an additional reason to show up early and watch the kids, not some reason to agree to do it that is not also a reason to do it. The point is not simply that agreeing to φ gives me reason to φ; that’s presumably true, but not the problem. What it seems like Estlund’s and similar theories need is a reason for me to agree to φ that is not also a reason to φ. Even if my reason in some case is a rather general consideration of procedural fairness, that same consideration gives me reason to φ and to agree to φ, not a reason to agree to φ that is somehow not also a reason to φ. Matters only seem worse if we’re talking about moral obligations to agree. I have no moral obligation to agree to trade monitoring duties, but it makes sense to think that I acquire a moral obligation to do the early duty because I agreed to do it. If I’m supposed to have a moral obligation to agree to φ, though, then φ’ing is not some option that I have good but indecisive reason to do until I acquire an obligation by agreeing to do it.
Rather, I’ve got an obligation to agree all along, and whatever the reason for that, it seems hard to see how it could fail to be a reason to do it as well as a reason to agree to do it.

But all that would just be another way of arguing for your conclusion: reasons favoring agreeing to obey are not what does the important explanatory work, but rather reasons favoring the obedience itself. It’s just that, if my objection is right, there couldn’t be reasons to agree to obey that were not themselves already reasons to obey.

I’m not feeling the intuition, so I guess I don’t get in on the ground floor with this one.

I mean, I think it would be reasonable for me to do as she asks unless some clearly preferable option occurs to me. But if some clearly preferable option should indeed occur to me, I wouldn’t feel constrained to forgo that option out of any sense of “awe of such a thing as I myself.”

What if the case is like this: you have pretty good, but not at all definitive, evidence that we should be calling for help on the radio first? (Or better: you have a .51 credence that her collective action-plan and command are in error.) Estlund would say, and I would agree, that in such a situation there would be (at least some small degree of) authority because there is obligation (or at least reasons) to obey that outstrips (one’s take on) the merits of the command; and so the argument now is just over the boundaries of authority in the situation (though if only reasons to obey but no obligation to obey is generated, this might be some kind of authority-light, not proper authority). If you imagine a scenario like this, do you have the relevant (minimal, perhaps comically minimal) authority intuition? I do think Estlund is thinking that most of us will have intuitions of a much more robust sort of authority involving obedience in the face of at least certain clear errors (both factually and in the eyes of the person who would obey).

I suspect that our intuitions here are sensitive to at least (a) degree of reliability of the authority figure, (b) the importance of getting a task done in short order or in an orderly and efficient way and (c) one’s own confidence in and evaluative take on relying on one’s own judgments both generally and with regard to one’s level of expertise relevant to the situation. But I also suspect that, as with accepting testimony, part of the normative reality here is determined by (d) motivations (of intrinsic aversion to failure to obey in relevant sorts of situations) that most of us are born with and that tend to get strengthened and justified by learning and socialization (but might not be, or might not be as much, if one has resisted, and is – perhaps virtuously – still resisting, the “authoritarian” aspects of socialization). If this last thing is right, then it would make sense that, in relevant sorts of situations, most of us non-instrumentally value obeying the right sort of command-givers and so we are not simply responding to the instrumental value or valuation (of social interactions or institutions characterized by relevant patterns of order-giving and obedience). And some way of doing this (non-instrumentally valuing some sort of obedience) might be best for most or all of us.

However, some of us will, justifiably or not, value obedience to relevant command-givers merely instrumentally (or at least claim to – as I say, I’m skeptical that anyone does not feel bad about disobeying relevant commands from relevant sorts of folks and skeptical that – once moderated and rationalized – such a reaction is not seen to be more a service than a detriment). In principle, this instrumental mind-set (or “official policy here” stance) is consistent with having reason to follow commands that one judges to be in error (e.g. because the overall efficiency of a vitally important social process, or many such processes, will be undermined if one tries to correct small errors in particular instances by pushing otherwise efficient command-and-obey social arrangements into committee meetings). This mind-set is perhaps more in tension with (but still consistent with) such reasons constituting obligations to obey (which I suppose you need to get authority in the paradigmatic sense).

I don’t think I understand what you mean by ‘non-instrumentally value.’ I do not think I non-instrumentally value acts of obedience. I am not sure any normal person does, even people who see obedience as such as immensely virtuous. That’s because I take ‘non-instrumentally value’ to mean ‘value as an end,’ and I don’t see how obedience could be an end. It’s not that I don’t see how it could be an end psychologically, it’s that I don’t see how it could conceivably be regarded as an end. To be conceivably regarded as an end, it would have to be able to provide an intelligible answer to the question ‘why are you doing that?’ But ‘to obey’ or ‘because she said so’ is not by itself an intelligible answer, even though in many pragmatic contexts we take it as one. It invites the obvious follow-up question, ‘but why obey? why do what she said?’ Feelings of guilt at disobeying, or of disapproval when someone else does, do not provide an intelligible answer either. By contrast, ‘because it tastes good’ is an intelligible answer to the question ‘why are you eating that?’ even if we can go on to give decisive reasons why you shouldn’t eat it. So too, an intelligible answer to why you are obeying does not give us a theoretical defense of obedience to authority, but it does at least identify a point in obeying. I don’t think it will do simply to say that we’re socialized into obedience; we’re socialized into it by becoming habituated to act in certain ways, but acting in certain ways involves acting for an intelligible purpose. Natural enough answers to the question in any given case are ‘because I’ll get in trouble if I don’t’ and ‘because we need to do what authority figures tell us in order for co-operation to work properly.’ Perhaps neither is a good reason, but those strike me as the actual reasons most people have for obeying when they obey.
What you call ‘intrinsic aversion to failure to obey’ seems to me like it might just be a kind of internalized version of trouble that we’re avoiding getting ourselves into, though, as I can attest from my own personal experience, not one that is especially difficult for many kids between the ages of 12 and 18 to get over. If there’s something being non-instrumentally valued here, it seems to be other people’s approval of me, with obedience as a means to preserving that.

There is a difference between regarding failure to obey (in these circumstances, if this sort of person commands, if the commands are reasonable enough, etc.) as something to be avoided (but not because it promotes something else that is to be avoided) and regarding failure to obey as something to be avoided because if I do this others will disapprove or get me in trouble. Maybe these two things often go together (or get mixed up). But it seems easy enough to disentangle them: just cook up a case in which disobedience will not result in disapproval or one getting in trouble, but one responds in the same way or regards it in the same way. You seem to think that such a stance (even though psychologically possible) is not intelligible or rational because ‘because she said so’ is not much of a reason (but ‘because if I disobey I’ll get in trouble’ is). I see the point here, but what does the work when one says ‘because she said so’ is the context (this person, in that context, issuing these sorts of commands). And once we make this explicit, this sort of reason is perfectly intelligible and rational.

I’m appealing to something like the idea of non-instrumentally-valuing-x-in-context-C. Maybe there is something wrong with this idea. I don’t think so. I think of this in terms of immediate motivational (or “valuational”) response to something (no processing of instrumental information required) – but only in a certain context. Maybe this is a strange or non-standard idea of non-instrumental/intrinsic concern, valuation, etc.?

I am thinking that the standard sort of discomfort about disobeying would be rationalized with ‘because I just should obey here’ not ‘if I disobey, I’ll get in trouble’. Maybe that is wrong and maybe these motivations often coexist or get mixed up in our psychology.

I’m late getting back to this, but I think the best I can do is to respond that I still don’t see that the response ‘because she said so’ is really intelligible. In certain contexts, it might be, but that’s because those contexts import some further good to be achieved or harm to be avoided, and we simply don’t need to mention it in ordinary discourse because it’s understood (or we think it’s understood). I was unclear when I said that this sort of response was otherwise unintelligible but added “not that I don’t see how it could be an end psychologically” — I didn’t mean to imply that I do see how obedience could be an end psychologically; I meant that the apparent unintelligibility of taking obedience as an end goes deeper than contingent psychological weirdness and into outright incoherence, given what obedience is and what it is to pursue something as an end. I take it to be uncontroversially familiar that people can be disposed to obey even when disobedience wouldn’t be discovered and they can’t offer any explicit reason beyond “she said so”; what I can’t understand is how “she said so” or “in order to obey my morally perfect mother” or something like that could really be an end that a person desired for its own sake. I don’t have any special psychological theory to offer of what people obsessed with obedience are really after, but I do not see how it could be obedience for the sake of obedience, even obedience-in-this-context-to-this-sort-of-person for its own sake. Perhaps people obsessed with obedience have internalized the expectation of disapproval (or something like that) and therefore feel or anticipate feeling guilty if they disobey. Avoiding painful feelings is an entirely intelligible end. But then we’re not obeying because we value obedience as an end, we’re obeying because we value the avoidance of painful feelings as an end, and obeying is a way to avoid painful feelings.
I don’t know whether anyone actually obeys purely for reasons of that sort, but it’s not wholly implausible. It’s at least intelligible, because it relates obedience to an intelligible end, viz. approval from people of the sort I approve, admire, or what not, and the avoidance of their disapproval.

So I think I agree with you that the context is what’s doing the work, but it seems to me that the work the context does is to bring with it a further point to obedience, a further point without which being motivated to obey would not simply be contingently psychologically impossible for human beings, but something more like impossible because there is no conceptually coherent description of it.

For whatever it’s worth, my intuition is that most people who obey without extended deliberation obey out of some combination of attempting to avoid punishment, including disapproval, and the idea that obedience to recognized authority figures is necessary for functional society. Perhaps human beings in general are too readily given to obedience, but as I said, I think if that’s true, it’s because we’re generally given to thinking of obedience in certain ways, not because we have some kind of intrinsic attraction to obeying for its own sake, an attraction that, as I say, I don’t find intelligible.

Even if I’m right, it may not pose any serious problems for your overall aims in your recent series of posts, because if I understand you correctly you’re mainly trying to give an account of why we should in fact recognize authority or something like it, and I gather you aren’t going to be justifying it by treating it as somehow an end worth choosing for its own sake.

Thanks, David. I think we are largely on the same page. Reasons are not ends, but you need ends to have reasons and a reason is a reason only relative to some end. So if someone gives ‘because she said so’ as a reason, this is consistent with the relevant end or ends being something other than doing as she says.

Some people do just enjoy being ordered around. But I don’t think this is how we are equipped with the proper motivation to achieve otherwise-not-achievable benefits of being able to form reliable command-and-obey structures of cooperation. I suppose any number of non-instrumental motivations could accomplish this, but wanting not to get in trouble with one’s fellows (combined with some sort of desire to make trouble for one’s fellows if they are not following instructions) might do the trick. Maybe it is more complicated, but I think there are any number of ways to achieve the relevant functionality (whatever the upshot for reasons, rationality and values – and I think there is one).

I suspect that part of the problem here is that your initial example is underspecified, so that there’s a temptation to fill in the missing bits. And depending on how you fill in the missing bits, intuitions change. This, I take it, is what Roderick is getting at.

1. Suppose that the flight attendant orders you to get the first aid kit, but doing so is obviously the wrong thing to do. Then the intuition that you should obey her seems to me to evaporate entirely–unless there is some sense in which you tacitly agreed to obey her by getting on the plane in the first place. But even so, that tacit consent might be defeated in precisely this sort of extreme case (even if it was originally designed for this case). (It’s actually possible that you expressly consent to obey the cabin crew; it’s probably buried in the fine print of your ticket.)

2. Suppose the attendant orders you to get the first aid kit. Doing so is not the optimal thing to do under the circumstances, but it is close enough, and you reckon that you’d rather satisfice than undermine her authority at a time like this (“authority” in the loose sense of her being regarded as the “leader” of the remaining survivors). In this case, the intuition that you should obey her seems plausible, but the source of the plausibility is ambiguous.

a. Even though, other things equal, getting the first aid kit rather than something else is a sub-optimal course of action, other things aren’t equal: there is the further consideration of having a single leader. Given this further consideration, all things considered, getting the first aid kit is the best thing to do, but that is because “getting the first aid kit” is really elliptical for “performing-the-action-that-is-all-things-best-for-us-under-the-circumstances-which-puts-a-premium-on-having-an-unambiguous-authority-figure-even-if-she-gives-somewhat-suboptimal-orders.” In this case, however, getting the first aid kit is just rational, full stop, as is obeying the flight attendant. So her “authority” is just grounded in the rational requirement to take the best available action under the circumstances.

b. Getting the first aid kit is the suboptimal course of action, full stop. But you’re obliged to take it because an authority figure has ordered it, full stop.

(2a) reduces the attendant’s authority to rationality.

(2b) leaves the attendant’s authority unreduced. But even if you take this line, it really only has plausibility if getting the first aid kit is at least in the ballpark of the right thing to do. If the flight attendant suddenly decided that a plane crash was the time for mindfulness meditations, the plausibility of her authority once again evaporates, so that we’re just back to (1).

So the example is susceptible of different interpretations, and you’d have to stipulate that no act of tacit (or express) consent went into it from the outset.

On reading this post, I couldn’t help thinking of William Golding’s Lord of the Flies, where the point is that under conditions like a plane crash, there are no real authorities. Conditions are too remote from those in which we have authorities for us to retain a connection to the concept. At first it seems that the character Ralph is the “leader” of the group, a position he wins in a vote based on non-rational considerations. But once the going gets rough for everyone, that initial authority decays and is eventually lost to a more brutishly irrational group of boys. Eventually, you get a descent into a Hobbesian State of Nature. When “authority” eventually shows up at the end of the book (in the form of a naval rescue mission), its apparent “authority” derives mostly from the fact that it consists of adults capable of getting the kids out of the situation they’re in.

I pretty much agree with all of that. I needed to fill the case in more, to get (2b), which is what I had in mind.

I find (2a) fascinating. Can we competently order options based on (i) choosing among the best local options (i.e., first aid kit first or not) and (ii) the effective-collective-action-based need to have a de facto authority relationship (even if some of the commands are in error)? How bad does an order have to be (assuming a generally competent order-giver) in order to justify undermining authority by (publicly) disobeying? There are more or less rational answers here, but most often we seem to lack the relevant information – or maybe the relevant information is indeterminate. (How will others react if I disobey like this? What about if I disobey like that instead? Will our competent, needed leader survive?) We end up with a lot of uncertainty and many options tied at hell-I-don’t-know-but-this-option-doesn’t-suck-out-loud status. I’m tempted to say that we get what you have in mind in (2a) only in a highly-artificial sort of scenario; in realistic scenarios, we are quite unsure about how to order our options with these two competing values in mind.

Perhaps because of this problem, I think there is something of a collective action problem: even if everyone in a group were ideally rational, at least in realistic informational conditions, we would fail to obey enough to get effective leadership and therefore be at a severe disadvantage in cooperating to achieve ends beneficial to all. And this might explain our having innate motivational capacities in the direction of obedience (direct motivation to obey, if certain conditions are met in the circumstance and in the order-giver). This is at least a big part of a solution to the collective action problem (that natural and perhaps cultural evolution might select for). What would such obedience-related ends need to be like to solve the problem (and do we have ends like this)? How would/do we properly rationalize desires to obey (given that they would be/are probably rather crude tools for flexibly achieving hierarchical-style cooperation)?

I have to respond to that with an anecdote that will at first seem totally irrelevant, but isn’t.

When I was a kid, I had a friend who locked his bike to a bike rack that (we later discovered) had a wasp’s nest on it. When he went back to retrieve his bike, accompanied by a bunch of us, he made this discovery and was too afraid to unlock the bike and retrieve it. As he stood there, cringing in fear at the wasps, one of his “friends” taunted him: “What are you afraid of? A bunch of insects?”

I have an analogous response to your comments on (2a). What are you afraid of? A bunch of epistemic problems?

To be serious for just a few seconds: I’d admit that the epistemic problems you mention are real, but I don’t think they’re irresolvable, and thus regard (2a) as a real option. The puzzle about (2b) is even larger, unless it’s tacitly spelled out in a way that reduces back to (2a): why is it individually rational to adopt a collective action framework?

In the interests of full disclosure, I should point out that under peer pressure, my friend did try to unlock his bike. He was then, inevitably, attacked by wasps.

Yes, that is (a version of) the important question: why is it individually rational to adopt a collective action framework? Or better: how, if something like (2a) is disastrously bad at achieving important collective or group states/effects, do we otherwise rationalize collective aims?

I agree that (2a) presents no particular problem for individual rationality. It might present one, but it need not, and I think in many cases it does not. By and large, we are able to make rational guesses about balancing out these two elements, sometimes quite good ones.

I suspect that, from a functional and motivational standpoint, these sorts of collective action problems get solved by our having “groupish” non-instrumental desires (and perhaps associated response-patterns). The question is how to square these with what I take to be the inherently individual framework of instrumental reason (including the “fast” process of instrumental reason-responsiveness as distinct from explicit reasoning). I don’t really have much of a solution to this problem and I’m not even entirely confident that I’m thinking about it exactly right.