The Appeal to Intuition: A Fallacy?

You might be familiar with what philosophers call an “appeal to nature”. It is a claim that something is good or bad because of how natural it is. Sometimes an appeal to nature is a fallacy. In this post, I discuss the possibility that an appeal to intuition is that kind of fallacy.

1. Different Brain, Different Intuition

First, imagine that your brain and my brain are radically different from one another. If this were the case, then it would be unsurprising to find that your intuitions were different from mine. Indeed, evidence suggests that even minor differences between brains are linked to differences in intuition (Amodio et al 2007, Kanai et al 2011).

This implies that our appeals to intuition (etc.) might be contingent upon brains being a certain way. In other words, differences in intuitions seem to be the result of differences in natural properties.†

2. The Appeal To Nature

It is not difficult to find an appeal to intuition in philosophy. For example,

The idea that [P] is so intuitive that most will need no more proof than its statement (Wenar 2008, 10).

Some claims seem true without any argument or evidence. Why do they seem true? Intuition. So, when someone implies that a claim can be true (or plausible) without providing argument or evidence for the claim, then they are probably appealing to intuition.

But does intuition tell us about what is true or plausible? Reliably? Approximately? Or does it tell us something else?

That is, if we appeal to widely shared intuitions, then we might ultimately be appealing to nature: we might only be appealing to the fact that most people’s brains are similar in a certain way. But the fact that the structure and function of most brains produce certain intuitions is not a sign that those intuitions are true (or even plausible). After all, if everyone’s brains were changed, then our intuitions might change, even though truth would not change.

3. Explanation ≠ Justification

While natural properties can help us explain our intuitions, they do not justify them.†† This becomes clearer when we consider counterintuitive claims.

Some people try to show that utilitarianism is counterintuitive by pointing out that psychopaths are more likely to make utilitarian judgments (Bartels and Pizarro 2011).††† But it’s not clear how that amounts to an argument against utilitarianism. After all, the difference between our moral intuitions and a psychopath’s moral intuitions might be just a difference in brains (Glenn et al 2009, see also a response from Koenigs et al). So contrasting our intuitions with psychopaths’ intuitions ends up looking a bit like an appeal to nature. It appeals to the nature of neurotypical brains.

Objection

“Not so fast!” you might say. “Why should we think that the difference between my intuitions and a psychopath’s intuitions is only a difference in brains? There are other differences. Namely, there are arguments that undermine psychopaths’ intuitions more than my intuitions” (this kind of response can be found in Deutsch 2015).

There are a couple of claims here, so let’s take them one at a time.

Responses

First, a concession. Neuroscience is far from fully explaining the differences in our intuitions; indeed, it’s not even clear that neuroscience can fully explain our intuitions at all. So it wouldn’t be surprising if there is more to our intuitions than what neuroscience tells us.

Second, let’s think about those arguments against the psychopaths’ intuitions. What makes these arguments compelling? Is it similar to what makes something intuitive? What if the appeal of arguments — like intuition — can be explained by natural properties of our brains? If this is the case, then the mere existence of compelling arguments for an intuition doesn’t necessarily justify the intuition. It might only provide a post hoc rationalization of the intuition (Schwitzgebel and Ellis 2016).

Recap

It’s not obvious why one set of intuitions is supposed to be better than another set. Science reveals only that certain brains (normal brains, philosophers’ brains, etc.) produce certain intuitions (Kahane et al 2012). But that does not necessarily justify intuition. (And it doesn’t necessarily debunk intuition either.)

4. So Is An Appeal To Intuition A Fallacy?

It seems pretty clear that our intuitions can depend on nature — e.g., on our brains being a certain way. So an appeal to intuition can be an appeal to nature. And an appeal to nature can be a fallacy. But that’s not enough to conclude that every appeal to intuition is a fallacy.

So the short answer to our question is this: not necessarily.

Longer answers will fall into at least three categories:

The necessary justification category: “No. Intuitions are justified because they are intuitions (and not because they are ubiquitous, reliable, useful, etc.). That is, intuitions are necessarily justified.”

The contingent justification category: “No. Intuitions are justified because of some accidental feature of intuitions (e.g., their ubiquity, reliability, utility, etc.). So intuitions happen to be justified, but they aren’t necessarily justified.”

The unjustified category: “Yes. An appeal to intuition is a fallacy because intuitions are not justified. They are only apparently justified. But when we study intuitions, we find that intuitions are the result of natural (neural) properties — nothing more. And, contra the appeal to nature, natural properties do not necessarily justify.”

I haven’t argued for any of these answers. I have merely tried to show that it’s not obvious how our intuitions can be justified. So you might wonder how you can argue for some of these answers.

5. The Self-Defeat Argument

The Self-Defeat Argument might be the easiest counterargument against the conclusion that intuitions are not justified. It’s pretty simple. Here’s the gist of the argument:

…the rejection of [intuition] is self-defeating, roughly, because one who rejected [intuition] would inevitably do so on the basis of [intuition]. […] Therefore, if this opponent of [intuition] were right, his [rejection of intuition] would itself be unjustified. (Huemer 2007, 39)

Sounds intuitive, right?

Well, before you get too comfortable with the Self-Defeat Argument, you should know that some philosophers don’t buy it (DePoe 2011; Mizrahi 2014). So if you want to accept the Self-Defeat Argument, then you might want to see how you can respond to those philosophers (Huemer 2011; Huemer 2014). And then you might want to see a counter-response (Mizrahi 2014).

Closing Thoughts

So what do you think? Are intuitions justified? How? Why? How can science help us answer that question?

If you’re new to the blog and you’re interested in more posts like this, then you can subscribe to the blog (in the menu) or follow me on social media.

14 thoughts on “The Appeal to Intuition: A Fallacy?”

I think you’re onto something that parallels what I’ve been thinking about recently.

Let’s, for the length of this comment, reverse things a bit. Suppose that the brain is instead built from intuitions. That from the beginning of our lives as human beings, we each have different kinds of intuitions that influence how each of our brains gets structured and built. Because really, what are intuitions? I believe they’re the intersection between all the information our brain processes that somehow makes enough “sense” for the subconscious to pass onto the conscious without the ability to transfer the vast amount of information required to “build” that intuition. I hope I’m making some sense so far. How do we perceive? Through senses. The five senses myth having already been debunked, senses allow us to perceive the physical world and many others we cannot touch. Among others, emotions, conceptualization and symbolism are all intangible parts of our reality that together create languages and the bases for society that we all can feel. They are all as real as the world we can touch. Imagine every human being somehow more “in tune” with a certain group of senses and the different kinds of intuitions it might create. It doesn’t make intuitions any more reliable than your theory, but it sounds more intuitive to me. Lol.

There is a lot in that comment. I’ll try to take a few smaller bites rather than go for the whole thing all at once.

A. The reversal idea is interesting: if we accept that the causal arrow goes the other way (from intuition to brain), then this might change things. But it’s hard to see how that is what is going on in the studies I mention above (e.g., Crockett). In those studies, experimenters manipulate something in the brain and then a change in judgment occurs. So the direction of causation seems to be pretty clear. It goes from brain to judgment.

B. I’m not sure I understand what is supposed to follow from this reversal. For example, I don’t know what you mean by ‘intersection’ in “intuitions are the intersection between all the information our brain processes.” And I’m not sure what ‘makes enough sense for the subconscious to pass onto the conscious’ means in the rest of the sentence. And I’m not sure what being “in tune” with senses and their resulting intuitions would mean.

C. I’m not sure I comprehend the discussion of the senses. How have the senses been debunked? How do senses perceive “other [worlds] we cannot touch?” What kind of worlds?

Feel free to clarify. If I understand it, I might also find it intuitive. :)

Thank you for the excellent survey of this interesting topic, and for the bibliography you provided. My personal view on intuitions has always been this one: we must, for practical reasons, stop the chain of justification at some point. Intuitions can play this role, and we have the right to consider them to be provisionally true until someone proves otherwise, and replaces the contested intuitions with new ones. And so on, ad infinitum. Intuitions are only there for practical reasons, and should be taken only as provisionally true. This is similar to the self-defeating argument, so I will enjoy reading what philosophers have to say about it. Cheers!

This is rather similar to Huemer’s view. So I think you’d like what Huemer has to say about this. The Huemer 2007 paper is a good place to start. If you want more, then Huemer’s 2001 book is very good.

It’s an interesting argument, but I’d like to raise a couple of objections.

What about epistemic intuitions? For example, let’s say Bob is a Young Earth Creationist (YEC), but he is consistent. Since theory is (at least nearly always, and definitely when it’s about empirical matters) underdetermined by evidence, he consistently believes that YEC is true. He accepts all of the observations made by scientists, but interprets them as Lucifer planting fossils to tempt us, or Yahweh doing so to test us, etc. Now, imagine that his brain is wired radically differently from yours, and he’s just assessing the evidence by his own epistemic intuitions (if you don’t call that “intuition”, my point would be that he’s still doing it by whatever means his brain/mind does it). So, that explains his different intuitions. But intuitions can be wrong, and appeals to epistemic intuitions might be contingent upon brains being in a certain way. And epistemic assessments that do not use the word “intuition” might be also contingent upon brains being in a certain way, etc. My question would be: if the line of argument you raise against appeal to intuitions in the case of morality is successful, why is it not successful in the case of epistemic intuitions? What would the relevant difference be? How can we even know moral truths? After all, it seems that we only have our sense of right and wrong to tell right from wrong, or some general theories, like utilitarianism in some form or another, or Christianity in some form or another, etc. But the general theories – as long as they do make predictions – can be tested only against our sense of right and wrong. I would say a proper way to test them is to consider hypothetical scenarios in which our sense of right and wrong delivers clear verdicts, and see whether the theory we’re testing matches the assessment. But is our sense of right and wrong different from our moral intuitions? What else is there? Perhaps, it’s partially a matter of terminology. What counts as an “intuition”?
But it seems to me that considering hypothetical scenarios to see what our sense of right and wrong says is a test of the theory vs. our intuitions, in any way one can construct it. Moreover, if our brain were wired very differently (depending on how it’s wired), it’s pretty clear our sense of right and wrong (whether that’s what one calls “intuitions” or not) would deliver very different verdicts, or perhaps – if radically different enough, like some sort of alien – it would not be a sense of right and wrong but some alien analog. What about color vision? If our brain is rewired in some ways, we could see colors very differently (that’s pretty clear; e.g., by the fact that nonhuman animals see colors very differently, or rather, they see species-specific analogues to color). Or we wouldn’t perceive color at all, but some analogue. But in any case, there would be a radical change too. In fact, there are people who do see colors differently, e.g., anomalous trichromats. Why is their color vision not reliable in tracking colors, and ours is reliable? (Whether the difference is a difference in brains or in eyes does not make a difference to the argument, given the analogy with the appeal to nature.)

Granted, maybe these replies are similar to a “self-defeat” argument, but I’m not using Huemer’s general line of argumentation, but narrowing it down to specific cases and making parallels.

Interesting objections. I’m not entirely sure I know what the target of the objection is since I didn’t take a position in the post.

…intuitions can be wrong, and appeals to epistemic intuitions might be contingent upon brains being in a certain way. And epistemic assessments that do not use the word “intuition” might be also contingent upon brains being in a certain way, etc.

I agree with all of this.

…if the line of argument you raise against appeal to intuitions in the case of morality is successful, why is it not successful in the case of epistemic intuitions? What would the relevant difference be? How can we even know moral truths?

I don’t see why the argument would be less successful with epistemic intuitions than it is with moral intuitions.

…is our sense of right and wrong different from our moral intuitions? What else is there? Perhaps, it’s partially a matter of terminology. What counts as an “intuition”?

I don’t think that there is an obvious relevant difference in these kinds of intuition.

Also, by ‘intuition’, I mean automatic and unconscious judgments. In your cases, they would be the automatic and unconscious judgments that serve as the foundations of epistemic and moral theory.

“if our brain were wired very differently (depending on how it’s wired), it’s pretty clear our sense of right and wrong (whether that’s what one calls “intuitions” or not) would deliver very different verdicts, or perhaps – if radically different enough, like some sort of alien – it would not be a sense of right and wrong but some alien analog.”

I think it’s clear that the verdicts could be different. But I don’t think it’s clear that the verdicts would fail to be a sense of right and wrong. Feel free to explain your reasons for the latter.

“…there are people who do see colors differently, e.g., anomalous trichromats. Why is their color vision not reliable in tracking colors, and ours is reliable?”

I’m not sure that our color vision is reliable. It might be sufficiently useful, but that is different. To say that color vision is reliable implies that we have verified the reliability of our color vision according to some independent and true measure of color. Have we done that? Can we do that?

But maybe you mean to say that two different color representations might nonetheless be tracking something in the world (e.g., a grayscale and a color photograph of one and the same scene (from the same vantage point, at the same time, etc.) systematically capture the same color/light differences in the scene). And if that is what you mean, then maybe your point is something like this: even if different brains have different intuitions, it doesn’t follow that one or the other set of intuitions misrepresents its environment. I would agree with that. Two different intuitions can successfully track features of the environment even if they represent them differently.

Although, I am unsure how to make sense of moral intuitions on that analogy. After all, it’s not clear that the world contains moral features. So it’s not clear if there is a viable analogy between perceptual representation and moral intuition.

The targets would be the arguments based on brain differences and the analogies with the naturalistic fallacy in support of the hypothesis that the appeal to intuitions is a fallacy, and/or that the appeal to intuitions lacks epistemic justification or is otherwise problematic. The idea is to use a “partners in innocence” response, combined with improbable conclusions if (one of) such hypotheses holds. Sorry if I gave the impression that you had argued that the arguments succeeded. I’m just arguing that they do not.

If you take no stance on whether the arguments in question succeed, I’m trying to convince you that they don’t. ;)

I don’t see why the argument would be less successful with epistemic intuitions than it is with moral intuitions.

But that would seem to be self-defeating, because our intuitive probabilistic assessments would fail, and so we would not be able to make any proper assessments whatsoever, except perhaps in math or logic (though almost certainly not even there, but leaving those aside). For example, if we reckon that it’s not the case that Jesus walked on water, or that the Moon Landing did happen, that the Earth is older than 10000 years, etc., we’re using our epistemic intuitions to make those assessments on the basis of observations that – as usual – do not determine them. After all, it’s consistent with all observations that Jesus walked on water, the Earth is less than 10000 years old, and the Moon Landing never happened. We just have to posit conspiracies, Lucifer or Yahweh messing with fossils for some reasons, or something else. Granted, all of that would be extremely improbable, but that’s again an intuitive probabilistic assessment. Granted, you might be using “intuition” and related words differently. However, the brain-based argument is not affected by that, because just as our intuitions (however you construe the word) could be very different if our brain were wired very differently, so would be our probabilistic assessments (called “intuitive” or called something else) regarding Jesus, the Moon Landing, evolution or anything else.

I think it’s clear that the verdicts could be different. But I don’t think it’s clear that the verdicts would fail to be a sense of right and wrong. Feel free to explain your reasons for the latter.

If aliens from another planet have color-like perceptions but associated with very different frequencies (e.g., their colors do not match with ours at all), they would have alien-color vision, not color vision, in the sense that if they visit Earth and (in their language) say that two of the traffic lights are the same color*, they’re not saying they’re the same color and disagreeing with us. The alternative conclusion would be that there is disagreement and their color words would mean the same as ours, but in that case, that would support an error theory of color, since it would be extremely improbable that our color vision got it right: either there is no such thing as color and our statements attributing color are all false (substantive error theory), or we have no reliable means of detecting color if it exists (epistemic error theory). In my assessment, the former interpretation is far more probable – the aliens would have color* vision, or alien-color vision, or whatever we call it, but not color vision (and language, etc.).

What if they develop something more or less like morality, but their sense of alien-right and alien-wrong (and good and bad, etc.) is associated with things (behaviors, entities, or whatever) considerably different from ours, even if less different than the color* vision? Would the aliens have an unreliable moral sense, or would we? Or would they instead have a probably generally reliable moral* sense, rather than a moral one? I think it depends on the specific minds the aliens might have, but if there are real aliens, the latter is much more probable – else, I would say that would support a moral error theory.

I’m not sure that our color vision is reliable. It might be sufficiently useful, but that is different. To say that color vision is reliable implies that we have verified the reliability of our color vision according to some independent and true measure of color. Have we done that? Can we do that?

I’m not sure what “true measure” means, but what I mean by “reliable” is that our statements like “the light was red” are usually true.

But maybe you mean to say that two different color representations might nonetheless be tracking something in the world (e.g., a grayscale and a color photograph of one and the same scene (from the same vantage point, at the same time, etc.) systematically capture the same color/light differences in the scene).

I think that might happen, but it’s improbable. I think it’s more probable that they might be tracking different things and capturing different things. One system might capture color, another color**, another color***, etc.

Although, I am unsure how to make sense of moral intuitions on that analogy. After all, it’s not clear that the world contains moral features. So it’s not clear if there is a viable analogy between perceptual representation and moral intuition.

I do find it clear that the world contains moral features, e.g., it’s immoral for people to torture other people for fun, so there are actual people behaving immorally, there are morally bad people, etc. The features do not need to be features of things other than minds. Features such as pain, desire, pleasure, hatred, and mental illness are also features the world contains, as it contains minds/brains with some of those properties. In any case, I would say that the analogy with color is relevant with respect to the objections based on brain differences and the potential appeal to nature, because if those arguments undermined moral assessments, it would seem that they would similarly undermine color ones – or, from a different perspective, further argumentation would be required to distinguish them. Granted, it might be argued that our sense of color tracks features of the world but our moral sense does not. However, it seems to me that that would need argumentation. Moreover, even that would not seem to distinguish them in a way that is relevant to the argument based on different brains: one can still say that if the brains were wired very differently, color or color-like perception would be very different, and then ask why that wouldn’t lead to a color error theory, etc.

But that would seem to be self-defeating, because our intuitive probabilistic assessments would fail, and so we would not be able to make any proper assessments whatsoever…”

I agree. I should have added ‘insofar as the argument against moral intuitions is successful’ to my ‘I don’t see why the argument would be less successful with epistemic intuitions than it is with moral intuitions.’ So all I mean is that insofar as the argument against moral intuitions works, it also works against other intuitions, and therefore it defeats justification altogether.

RE: aliens, color, morality

I’m not sure what calculations go into your “far more probable” and the like, so I’m not sure what to make of these claims. But the distinctions you’re making are interesting.

I’m not sure what “true measure” means, but what I mean by “reliable” is that our statements like “the light was red” are usually true.

My point is just that in order to verify whether something is reliable (in the “usually true” sense), we would need an independent and reliable measure of truth. But if we had that independent and reliable measure of truth, then determining the (reliabilist) justification of things (like intuitions) would be easy. Since that determination isn’t easy, our claims about the (reliabilist) justification of color/moral/etc. intuitions/perceptions seem to be merely speculative. That’s not to say that they’re false. It’s just to say that I’m not confident that they’re true (as opposed to merely useful).

I do find it clear that the world contains moral features.

Perhaps we see the world differently. And maybe that’s a difference in our brains! ;)

So all I mean is that insofar as the argument against moral intuitions works, it also works against other intuitions, and therefore it defeats justification altogether.

Alright, so my conclusion from that would be that the argument against moral intuitions does not work (I don’t see how one can defeat justification altogether in a non-self-defeating manner). I will address the rest of your points, but this looks like the central issue.

I’m not sure what calculations go into your “far more probable” and the like, so I’m not sure what to make of these claims.

What I’m thinking is this: let’s say that aliens are such-and-such. Would they disagree with us, or would they be talking about something else? I think that the meaning of words is given by usage, and the aliens are using their words to mean other stuff.

My point is just that in order to verify if something is reliable (in the “usually true” sense), we would need an independent and reliable measure of truth.

I’m not sure what you mean by “independent”, or why it would have to be so general. Could you clarify those points, please?

But if we had that independent and reliable measure of truth, then determining the (reliability) justification of things (like intuitions) would be easy.

We might disagree about how difficult it is – or how probable the philosophical objections are -, but in any event, I don’t see why it would be easy. Present-day math is very difficult, but that does not change the fact that we have a generally reliable means of making mathematical assessments in daily context, even if it becomes less effective when the numbers become greater, etc.

Since that determination isn’t easy, our claims about the (reliabilist) justification of color/moral/etc. intuitions/perceptions seem to be merely speculative.

I’m not sure what you mean by “(reliabilist) justification”; I only meant that in ordinary contexts, our color and moral assessments are generally correct. But I concede that if you’re a skeptic and disagree with that, you will very probably not find that assertion probable enough.

Perhaps we see the world differently. And maybe that’s a difference in our brains! ;)

It’s extremely probable that it is so. I used to be more skeptical about morality, by the way, but for different reasons (mostly a version of an argument from apparent disagreement; but now I think moral disagreement is not as extensive as I thought, given the wide background of shared moral assessments that are much less salient than the less common disagreements). But my brain changed! ;)

Here’s what seems like the most efficient way to answer your question.

“Correct” and “true”

You say,

our color and moral assessments are generally correct.

I find myself wondering how we would know that our assessments are correct.

If we wanted to verify that an assessment is correct, then we need to know what is correct and then compare the assessment to what is correct. My point is that we have only assessments. We don’t have access to what is correct — the independent and reliable measure of truth.

Example: I’ll use math since you mentioned it. Our assessments in math are correct/incorrect depending on axioms. But who knows whether axioms are true? At bottom, our acceptance/rejection of axioms comes down to our intuition(s). I think the same goes for morality. Our moral assessments are based on axioms which are based on intuition.

So I don’t understand how we can say — with any confidence — that our assessments are correct full stop. They are only correct according to certain axioms.

Self-defeat

I don’t see how one can defeat justification altogether in a non-self-defeating manner

I think the idea would be to give up on justification altogether and treat arguments as merely hypothetical. E.g., “If [axioms and] P, then Q. And I am a quietist about the justification of the axioms or P, or of the inference from those to Q.”

At the beginning of your article you make a statement to the effect of ‘Imagine our brains are radically different’. That’s great when you are talking about imagining a fantasy world, but such a condition cannot exist! The genes instructing the formation of our neural circuits are some of the most conserved, and any alteration/mutation will likely result in disaster due to the incredible sensitivity of the developing nervous system. It seems far more likely that intuition is gleaned from experience (i.e., normal is dependent on the observer) and translated into specific neural pathways (synapses). I realize this is a bit off subject from your article, but it seems impossible to discuss any facet of how we think or perceive without discussing a biological basis. Otherwise we are just spouting theology.

People survive brain lesions and brain lesions can result in different moral intuitions — as I discuss in the second section of this post. So there is a non-fantasy, non-negligible probability that brain changes will not result in disaster and will also result in changes in our intuitions.

We need not accept a brain-experience dichotomy when explaining differences in intuition. Differences in intuition can be explained both by differences in experience and by differences in brains. Further, part of the explanation provided by the brain could be independent of the explanation provided by differences in experience.

I agree that small changes to our brains could result in disaster. But I don’t know how you calculated the ‘likely’ in “will likely result in disaster”, so I am not sure what to make of that claim.

I find myself wondering how we would know that our assessments are correct.

I don’t often find myself wondering that, but I can if I choose to. However, it seems to me that one can raise that matter for any assessments whatsoever, not just color or moral assessments, but assessments about mathematics, philosophy, physics, astronomy, geology, chemistry, biology, and so on. I can say that we can check that they’re correct by our own lights, but of course, in doing so, we’re using our own intuitions and making probabilistic assessments (or similar ones). There is nothing else we can do. But I don’t think this is a threat to our ordinary assessments on any of those domains. In any event, I don’t think that there is any strong reason to single out color or morality in this context.

If we wanted to verify that an assessment is correct, then we need to know what is correct and then compare the assessment to what is correct. My point is that we have only assessments. We don’t have access to what is correct — the independent and reliable measure of truth.

I don’t see why it would have to be independent, but I would say that some of us know that (for example):

a. Assessments about the history of life on Earth made by biologists are generally correct (though “generally” does not imply always, of course), while assessments about the history of life on Earth made by Young Earth Creationists are not generally correct.

b. Assessments about the history of space travel made by NASA on its website are generally correct, whereas assessments about the history of space travel made by Moon Landing conspiracy theorists are not generally correct.

c. Mathematical assessments made by professional mathematicians while they’re doing their job are generally correct. Mathematical assessments made by highly intoxicated people are far more frequently incorrect.

Example: I’ll use math since you mentioned it. Our assessments in math are correct/incorrect depending on axioms. But who knows whether axioms are true? At bottom, our acceptance/rejection of axioms comes down to our intuition(s). I think the same goes for morality. Our moral assessments are based on axioms which are based on intuition.

I think if we take an axiomatic approach, the axioms are true of some domains (or abstract scenarios, or whatever we call them), not of others. We just choose which ones to pick because they’re interesting to us, often because we reckon they’re true in a domain we already are implicitly using, and trying to model (e.g., the class of all hereditary sets might be such a domain, or the set of natural numbers). In any case, I do agree that in the end it comes down to our intuitions. But I just do not see this as a big problem, unless we have specific reasons to doubt some of our intuitions – and in that case, the problem would be for those specific intuitions.

So I don’t understand how we can say — with any confidence — that our assessments are correct full stop. They are only correct according to certain axioms.

I don’t understand why saying that they’re correct full stop is a problem. But if it is, I don’t see how we could say that they’re correct according to certain axioms. After all, in order to say that, we also need to rely on our own cognitive abilities (intuitions, epistemic probabilistic assessments that come to us, or however we prefer to put it).

I think the idea would be to give up on justification altogether and treat arguments as merely hypothetical. E.g., “If [axioms and] P, then Q. And I am a quietist about the justification of the axioms or P, or of the inference from those to Q.”

Thanks for explaining. I don’t see how the hypothetical treatment would do any better, because we would still need to resort to some intuition or another (or whatever we call it) in order to tell that if axioms and P, then Q. Additionally, it’s clear to me that Young Earth Creationists are mistaken – and so are Christians, by the way – and saying that they’re mistaken given some axioms, etc., would neither be what I usually want to say about it, nor describe my beliefs – i.e., I believe they’re mistaken full stop. I can’t bring myself to believe otherwise. And I don’t think I should if I could. Of course, those are assessments that I make by my own lights (and/or intuitions plus something else, or whatever one calls them), but as I mentioned before, I don’t see this as problematic (different brains and all ;-)).
